Markov chains with heavy-tailed increments and asymptotically zero drift

We study the recurrence/transience phase transition for Markov chains on $\mathbb{R}_+$, $\mathbb{R}$, and $\mathbb{R}^2$ whose increments have heavy tails with exponent in $(1,2)$ and asymptotically zero mean. This is the infinite-variance analogue of the classical Lamperti problem. On $\mathbb{R}_+$, for example, we show that if the tail of the positive increments is about $c y^{-\alpha}$ for an exponent $\alpha \in (1,2)$ and if the drift at $x$ is about $b x^{-\gamma}$, then the critical regime has $\gamma = \alpha -1$ and recurrence/transience is determined by the sign of $b + c\pi \operatorname{cosec} (\pi \alpha)$. On $\mathbb{R}$ we classify whether transience is directional or oscillatory, and extend an example of Rogozin \& Foss to a class of transient martingales which oscillate between $\pm \infty$. In addition to our recurrence/transience results, we also give sharp results on the existence/non-existence of moments of passage times.


Introduction
Lamperti's problem describes how the asymptotic behaviour of a non-homogeneous random walk (Markov chain) ξ_n on R_+ whose increments have at least two moments is determined by the interplay between the increment moment functions µ(x) := E[ξ_{n+1} − ξ_n | ξ_n = x], and s²(x) := E[(ξ_{n+1} − ξ_n)² | ξ_n = x]. (1.1) Lamperti [8, 9, 10] showed, among other things, that the critical regime for the recurrence or transience of ξ_n is when xµ(x) is comparable to s²(x). For further background and results, see [2, 12] and the references therein; the continued interest in Lamperti's problem is due in part to its nature as a prototypical near-critical stochastic system.
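As a toy illustration of this criterion (our own example, not taken from the paper): the nearest-neighbour walk on the positive integers that steps +1 with probability (1 + b/x)/2 and −1 otherwise has µ(x) = b/x and s²(x) ≡ 1, so xµ(x)/s²(x) ≡ b and Lamperti's dichotomy reduces to comparing 2b with 1:

```python
# Hedged sketch (not from the paper): Lamperti's criterion for the
# nearest-neighbour walk on {1, 2, ...} that steps +1 with probability
# (1 + b/x)/2 and -1 otherwise, so the drift decays like b/x.

def increment_moments(x, b):
    """Return (mu(x), s2(x)) for the +/-1 walk with drift parameter b."""
    p_up = 0.5 * (1.0 + b / x)
    mu = p_up * 1 + (1 - p_up) * (-1)   # = b / x
    s2 = p_up * 1 + (1 - p_up) * 1      # = 1, since the increment is +/-1
    return mu, s2

def lamperti_classify(b):
    """Compare 2*x*mu(x) with s2(x), i.e. 2b with 1 for this walk.
    The boundary case 2b = 1 is delicate; we lump it with 'recurrent'
    here purely for simplicity of the sketch."""
    return "transient" if 2 * b > 1 else "recurrent"

mu, s2 = increment_moments(100.0, b=0.25)
print(mu, s2, lamperti_classify(0.25), lamperti_classify(0.75))
```

For b = 0.25 the walk is recurrent (2b = 0.5 < 1), while for b = 0.75 it is transient (2b = 1.5 > 1), exactly the kind of near-critical behaviour the paper studies in the infinite-variance setting.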
In this paper, we study the heavy-tailed case where s²(x) = ∞, but µ(x) is still well-defined. Under mild conditions, if lim sup_{x→∞} µ(x) < 0 then the Markov chain on R_+ is positive recurrent, while if lim inf_{x→∞} µ(x) > 0 then the chain is transient (see e.g. Theorems 2.6.2 and 2.5.18 of [12]). Thus the case of central interest is the asymptotically zero drift regime where µ(x) → 0 as x → ∞.
As well as classifying recurrence and transience, we quantify the recurrent cases by determining which moments of return times to a bounded set are finite.
In addition to considering chains on R_+, we also consider chains on R, and give an example on R². On R, the situation is richer than on R_+, and also than in the case where s²(x) = O(1), since oscillatory transience can occur: lim inf_{n→∞} ξ_n = −∞ and lim sup_{n→∞} ξ_n = +∞, but nevertheless lim_{n→∞} |ξ_n| = ∞. In the case of zero drift (µ(x) ≡ 0), on R_+ the chain is recurrent under mild conditions (see Example 2.5.6 of [12]), but on R zero drift does not imply recurrence when s²(x) = ∞ (cf. Theorem 2.5.7 of [12]). Rogozin & Foss [13] gave a concrete example on R of a zero-drift chain that is transient. The example has heavy-tailed jumps, with exponent β ∈ (1, 3/2], inwards to the origin from both the left and right half-lines. We show (Theorem 2.7 below) that the transience in this case is oscillatory, give general conditions for behaviour of this kind, and also show how an asymptotically zero drift perturbs the picture. Again, in the recurrent cases we also study moments of passage times.
It is worth emphasising that for all our results the Markov property is not essential, and can be dispensed with without complicating the proofs, only the notation: cf. [8] or Chapter 3 of [12].
Apart from being interesting in their own right, stochastic processes on the positive half-line are important in the study of higher-dimensional processes via the Lyapunov function method (see e.g. [1, 4, 8, 12]): for a Markov chain ζ_n on R^d, the analysis typically proceeds by considering a one-dimensional projection ρ : R^d → R_+ of the form ρ(x) := ‖x‖^ν, for some positive constant ν; then ξ_n = ρ(ζ_n) is a (not necessarily Markov) process on R_+, and recurrence/transience of ζ_n and ξ_n coincide. As a first step into higher dimensions, we give an example of a heavy-tailed random walk on R² and show how our approach can be used to provide conditions for recurrence and transience.
We briefly mention other relevant literature. The case of balanced tails (when the left and right jumps have the same exponent) has been studied in several papers by Sandrić in both discrete and continuum settings: see [14][15][16] and references therein. The only previous results on passage time moments that we know of are due to Doney [3] in the case of a zero-drift, spatially homogeneous random walk, and a subset of the authors [11]; these include some special cases of our results, but not the asymptotically zero drift regime. The case where even µ(x) is not well-defined is quite different: see [7,11] and references therein.
Transience is typically quantified either via almost-sure growth bounds on trajectories or estimates of last-exit times. In a related context, results of both types are studied in [7] for chains with non-integrable increments. Although not directly applicable, we believe the techniques used could be adapted to this setting. Section 2 presents our main results, working in turn through the cases on R + and R, and finally giving an example on R 2 and an open problem for a reflected random walk in a quadrant with heavy tails. Section 3 contains the tools that we need to determine the asymptotic behaviour of our processes via the method of Lyapunov functions. These include criteria for directional and oscillatory transience. Section 4 carries out the analysis of our Lyapunov functions. Section 5 contains the proofs of the main results. Technical results on integral computations needed for our Lyapunov-function estimates are collected in the Appendix.

Notation
For all the models in this paper, we use the following notation. Let X be an infinite, unbounded, measurable subset of R^d (d ∈ N), equipped with its own σ-algebra 𝒳. To avoid unnecessary complications with conditioning, we suppose in the sequel that the measurable space (X, 𝒳) is a standard Borel space. Let Ξ := (ξ_n, n ∈ Z_+) be a time-homogeneous X-valued Markov process. Here and elsewhere, N := {1, 2, 3, . . .} and Z_+ := N ∪ {0}. The Markov process Ξ is determined by its Markov kernel P : X × 𝒳 → [0, 1], specifying, as usual, the conditional law P(x, A) = P(ξ_{n+1} ∈ A | ξ_n = x), for x ∈ X and A ∈ 𝒳. By the Ionescu-Tulcea theorem, the kernel P and the law of ξ_0 uniquely determine a probability measure P on the space X^{Z_+} that can be disintegrated into the conditional measures P_x[ · ] = P[ · | ξ_0 = x] for all deterministic initial conditions. We write E_x for the expectation corresponding to P_x. For n ∈ Z_+, define the increment θ_{n+1} := ξ_{n+1} − ξ_n. Consistently, the transition kernel of the chain is recovered from the law of θ_1 given ξ_0; to ease notation, we write θ for θ_1.

Walks on the half line
We start with X ⊆ R_+. We say that Ξ is transient if lim_{n→∞} ξ_n = ∞, a.s., and that Ξ is recurrent if lim inf_{n→∞} ξ_n ≤ r_0, a.s., for some deterministic r_0 ∈ R_+. For a ∈ R_+ define τ_a := min{n ∈ Z_+ : ξ_n ≤ a}. If Ξ is recurrent then P_x[τ_a < ∞] = 1 for any a > r_0 and any x; if, moreover, E_x[τ_a] < ∞ for all a sufficiently large and any x, we say that Ξ is positive recurrent, while if for all a sufficiently large and all x > a we have E_x[τ_a] = ∞, then we say that Ξ is null recurrent. We typically (but not always) assume the following.

Remarks 2.2.
(i) Note that cosec(πα) < 0 for α ∈ (1, 2), so part (i) says that the case µ(x) ≤ 0 is recurrent, as one would expect, and shows what magnitude of positive drift must be added to achieve transience.
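To make the threshold concrete, the critical quantity b + cπ cosec(πα) from the abstract can be evaluated numerically (our own sketch, not from the paper; `critical_constant` is a hypothetical helper name):

```python
import math

def critical_constant(alpha, c, b):
    """b + c*pi*cosec(pi*alpha), whose sign separates transience (> 0)
    from recurrence (< 0) in the critical regime gamma = alpha - 1.
    Note cosec(pi*alpha) = 1/sin(pi*alpha) < 0 for alpha in (1, 2)."""
    return b + c * math.pi / math.sin(math.pi * alpha)

# For alpha = 3/2, sin(3*pi/2) = -1, so (with c = 1) the drift amplitude
# must exceed pi before the chain becomes transient:
threshold = -math.pi / math.sin(1.5 * math.pi)
print(threshold)  # pi ≈ 3.14159...
```

This illustrates the remark: since cosec(πα) < 0 on (1, 2), a strictly positive drift amplitude is needed to overcome the heavy inward pull and produce transience.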
For the rest of the results that we present, we will also assume an asymptotic form for the drift. This is partly for simplicity of statement, but also necessary to get sharp transitions in our existence-of-moments results. The assumption is as follows.
In the recurrent case, we can precisely quantify the recurrence by showing which moments of passage times exist. Here and elsewhere Γ denotes the (Euler) gamma function. Theorem 2.3. Suppose that the Markov chain Ξ on R + satisfies (T α,β,c ) and (D γ,b ). Let q > 0. Then for all a sufficiently large and all x > a, the following hold.
(ii) If Ξ is a martingale satisfying assumptions (N) and (T α,β,c ), then (D γ,b ) is automatically satisfied (for b = 0 and any γ). Then Ξ is recurrent by Theorem 2.1(i), and Theorem 2.3(i) shows that the critical exponent for τ a is 1/α ∈ (1/2, 1); in the case of a spatially homogeneous random walk, this fact is essentially contained in a result of Doney [3]. In this sense, such a martingale with heavy tails away from the origin is more recurrent than simple symmetric random walk on R + , which has critical exponent 1/2 (see, e.g., [12, Example 2.7.6]) but is still null recurrent.

Walks on the whole line: heavier outward tails
Now we turn to the case X ⊆ R. We first suppose that for x > 0, essentially the same conditions (T α,β,c ) and (D γ,b ) as above hold, while for x < 0 the symmetric version of those conditions holds, so that the outwards increment always has a heavier tail than the inwards one. This case closely corresponds to the walks on R + of the previous section. In full, the assumptions are as follows. We extend the definition of µ(x) := E x [θ] to x ∈ R.
In the following theorem only directional transience appears; we will see instances of oscillatory transience in the next sections. Theorem 2.5. Suppose that the Markov chain Ξ on R satisfies (N ), (T out α,β,c ), and (D γ,b ). Then the following classification holds.
The moments result here is essentially the same as that on R + from Theorem 2.3. Theorem 2.6. Suppose that the Markov chain Ξ on R satisfies (T out α,β,c ) and (D γ,b ). Then the statements in Theorem 2.3 hold verbatim, given that τ a = min{n ∈ Z + : |ξ n | ≤ a}.

Walks on the whole line: heavier inward tails
Again with X ⊆ R, we now flip things around so that the inwards increment has the heavier tail. Here more interesting phenomena occur. Our assumptions are as follows.
Our recurrence classification in this case is as follows.
If assumption (T^+_δ) is also satisfied, then the following transience criteria hold.

Remarks 2.8.
(i) For zero drift (b = 0), the phase transition at β = 3/2 was first observed by Rogozin & Foss [13], assuming that the law of θ depends only on the sign of ξ 0 .
Remark 2.10. In the special case γ = 0 and assuming that no jumps away from the origin occur, the conclusion of Theorem 2.9(iii) coincides with Theorem 6(i) of [11].

Walks on the whole line: the balanced case
For the last of our models on R, the inwards and outwards tail exponents coincide.
If, in addition, the limit assumptions in (T bal α,c ) are strengthened to O(y^{−δ}) for some δ > 0, then the following transience condition also holds.
(v) If γ = α − 1 and b + cπ cot(πα/2) > 0 then Ξ is oscillatory transient. Theorem 2.12. Suppose that the Markov chain Ξ on R satisfies (T bal α,c ) and (D γ,b ). Let q > 0. Then for all a sufficiently large and all x > a, the following hold.
Remarks 2.13. (i) In the special case where the distribution of θ is symmetric (so the drift is zero), the conclusion of Theorem 2.12(i) coincides with Theorem 4(ii) of [11].
(ii) Sandrić [14,16] considers symmetric or near-symmetric increment distributions, but allows the exponent α(x) to vary with position x ∈ R. In particular, if α(x) ∈ (1, 2) is bounded away from 1 and 2, then [14,16] give sufficient conditions for recurrence, transience, and positive recurrence that generalize, under some additional conditions, the relevant parts of our Theorem 2.11 (see the discussion in [16, pp. 465-466]). Similar results for an analogous continuous-time model (Feller process) are obtained in [15].

Walks in higher dimensions: an example
We present a family of martingales on R² with heavy-tailed jumps that contains both recurrent and transient walks. The example is deliberately simplistic for expository purposes; the phenomenon could certainly be reproduced for more naturally motivated random walks. Similarly, although the walks we describe here are two-dimensional, this example can easily be extended to any dimension d ≥ 2. Let 0 denote the origin of R². For x ∈ R² \ {0} write u_x := x/‖x‖, and write v_x for the unit vector perpendicular to u_x obtained by rotating u_x anticlockwise by π/2. Our Markov chain Ξ on X ⊆ R² is constructed so that, given ξ_0 = x ≠ 0, the jump θ is given by θ = χ u_x θ_R + (1 − χ) v_x θ_T, where χ ∈ {0, 1}, and θ_R, θ_T ∈ R are independent of χ. If χ = 1 we say that θ is a radial jump, and if χ = 0 we say that θ is a transverse jump. We also assume that P_0[θ = 0] < 1. We suppose that positive radial jumps are heavy tailed, that negative radial jumps have bounded moments, and that the transverse jumps are symmetric and also have heavy tails. Specifically, we assume the following.
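The jump mechanism can be sketched in code as follows (our own illustrative implementation; the Pareto-type jump laws and the centring constant are placeholder assumptions standing in for the tail and drift conditions stated below):

```python
import math
import random

def unit_vectors(x):
    """u_x = x/|x| and v_x = u_x rotated anticlockwise by pi/2."""
    r = math.hypot(x[0], x[1])
    u = (x[0] / r, x[1] / r)
    v = (-u[1], u[0])  # anticlockwise rotation of u by pi/2
    return u, v

def one_step(x, p_radial, draw_radial, draw_transverse, rng):
    """One jump from x != 0: radial (chi = 1) with probability p_radial,
    otherwise transverse (chi = 0)."""
    u, v = unit_vectors(x)
    if rng.random() < p_radial:
        t = draw_radial(rng)
        return (x[0] + t * u[0], x[1] + t * u[1])
    t = draw_transverse(rng)
    return (x[0] + t * v[0], x[1] + t * v[1])

# Placeholder heavy-tailed draws, illustrative only: Pareto-type positive
# radial jumps (roughly centred) and symmetric Pareto transverse jumps.
alpha = 1.5
draw_R = lambda rng: rng.paretovariate(alpha) - 1.2
draw_T = lambda rng: rng.choice([-1, 1]) * rng.paretovariate(alpha)

rng = random.Random(1)
x = (3.0, 4.0)
u, v = unit_vectors(x)  # u = (0.6, 0.8), v = (-0.8, 0.6)
```

Note that u_x and v_x are orthonormal by construction, so radial jumps change ‖ξ_n‖ directly while transverse jumps push the walk away from the origin only at second order.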

Remarks 2.15.
(i) The theorem shows how the balance between radial and transverse increments determines the recurrence behaviour of zero-drift random walks; for walks whose increments have two moments, the analogous phenomenon is driven by the increment covariance matrix, as described for example in [5] or [12, §4.2].
(ii) Since cos(πα/2) < 0 for α ∈ (1, 2), there are walks exhibiting either behaviour for any α ∈ (1, 2). Indeed, both the ratio c_R/c_T and the ratio p_R/p_T can take any value in (0, ∞), so for fixed α ∈ (1, 2) the walk is recurrent if either ratio is taken large enough, and transient if either ratio is taken small enough.
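Numerically, the sign criterion p_R c_R + 2 p_T c_T cos(πα/2) that appears in the proof of Theorem 2.14 can be explored directly (our own sketch; `planar_criterion` is a hypothetical helper name):

```python
import math

def planar_criterion(alpha, p_R, c_R, p_T, c_T):
    """p_R*c_R + 2*p_T*c_T*cos(pi*alpha/2): positive sign corresponds to
    recurrence, negative to transience, in the planar example."""
    return p_R * c_R + 2 * p_T * c_T * math.cos(math.pi * alpha / 2)

alpha = 1.5  # cos(3*pi/4) = -sqrt(2)/2 < 0
print(planar_criterion(alpha, 0.5, 10.0, 0.5, 1.0) > 0)  # strong radial tail: recurrent
print(planar_criterion(alpha, 0.5, 1.0, 0.5, 10.0) > 0)  # strong transverse tail: transient
```

Tilting either ratio c_R/c_T or p_R/p_T flips the sign, matching the remark above.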

Walks in higher dimensions: an open problem
We finish this section with an open problem concerning a partially homogeneous random walk Ξ on X = Z^2_+. Partition Z^2_+ into sets I: then also important is the covariance of θ in region I: see pp. 56-58 of [4], or [1].
What about when M_0 = 0 but the second moment is infinite? Concretely, suppose that for c_1, c_2 > 0 and α_1, α_2 ∈ (1, 2), for j ∈ {1, 2} and y ∈ N. Thus the jumps of Ξ are heavy-tailed away from the axes, but bounded towards the axes. The answer to Problem 2.16 may well depend on M_1, M_2, the c_j and α_j, and also on some analogue of the increment covariance matrix in the heavy-tailed setting.

Semimartingale criteria for real-valued processes
Central in proving our main results is the Lyapunov function methodology: we find a function f so that f (ξ n ) satisfies a suitable semimartingale condition, which implies the desired properties of Ξ. In this section we collect the semimartingale criteria that we need; mostly this involves presenting known results, but a little work is needed to get statements in the form that we want for establishing directional or oscillatory transience.
All of these results are stated for a real-valued stochastic process (X_n, n ∈ Z_+) adapted to a filtration (F_n, n ∈ Z_+). This generality, without assuming that X_n is Markov, is useful so that we can apply the results to e.g. X_n = ‖ξ_n‖ for ξ_n ∈ R² the model in Section 2.6. First we present recurrence and transience criteria for processes on R_+; these results are Theorems 3.5.8 and 3.5.6(ii) in [12].

Lemma 3.2 (Transience criterion on R_+).
Then lim_{n→∞} X_n = ∞, a.s. Now we state two criteria for processes on R that apply to 'two-sided' Lyapunov functions. The recurrence criterion is Lemma 5.3.15 in [12], and the transience criterion follows from Lemma 5.3.16 in [12] as in the proof of Theorem 5.3.1 there.

Lemma 3.3 (Recurrence criterion on R).
Suppose that (X_n) is an (F_n)-adapted process taking values in R and lim sup_{n→∞} |X_n| = ∞, a.s. Let f : Then lim inf_{n→∞} |X_n| ≤ x_1, a.s.

Lemma 3.4 (Transience criterion on R).
Suppose also that there exists x_1 ∈ R_+ for which, for all n ≥ 0, For processes on R, we also have criteria for directional or oscillatory transience. Lemma 3.5 (Directional transience criterion). Suppose that (X_n) is an (F_n)-adapted process taking values in R and lim sup_{n→∞} |X_n| = ∞, a.s. Let f : R → R_+ be such that sup_x f(x) < ∞, lim_{x→+∞} f(x) = 0, and inf_{y≤x} f(y) > 0 for any x ∈ R_+. Suppose also that there exists x_1 ∈ R_+ for which, for all n ≥ 0, Then lim_{n→∞} X_n ∈ {−∞, +∞}, a.s. Lemma 3.6 (Oscillatory transience criterion). Suppose that (X_n) is an (F_n)-adapted process taking values in R and lim_{n→∞} |X_n| = ∞, a.s. Let f : Then lim inf_{n→∞} X_n = −∞ and lim sup_{n→∞} X_n = +∞, a.s.
We give the proofs of these two results. A key step in the proof of Lemma 3.5 is the following hitting probability estimate.
Proof. The idea is standard: the proof of Lemma 3.5.7 of [12], although that result is stated for processes on R + , carries over directly to this case.
Proof of Lemma 3.5. Under the conditions of the lemma, it is clear that the hypotheses of Lemma 3.7 hold for any x_2 with x_2 > x_1 > 0, and for both (X_n) and (−X_n). So for any x_2 > x_1 and ε > 0 there exists x ∈ (x_2, ∞) for which, for all n ≥ 0, In other words, Given lim sup_{n→∞} |X_n| = ∞, a.s., we have P[σ_x < ∞] = 1, and since ε > 0 was arbitrary, Then, since x_2 > x_1 was also arbitrary, together with the fact that lim inf_{n→∞} X_n ≥ x and lim sup_{n→∞} X_n ≤ −y are mutually exclusive for x, y > 0, the result follows.
Now we turn to the proof of Lemma 3.6. First, we give a variation on Lemma 3.1 for processes on R; the proof from [12, p. 113] carries across directly to this setting.
Finally, we state two results that provide general conditions for the existence and non-existence of certain moments of passage times. These are reformulations of Theorem 1 and Corollary 1 of [1] (see also [11, §6.1] or [12, §2.7]). Lemma 3.9. Let (Y_n) be an integrable (F_n)-adapted stochastic process, taking values in an unbounded subset of R_+, with Y_0 = y_0 fixed. For x > 0, let σ_x := inf{n ≥ 0 : Y_n ≤ x}. Suppose that there exist δ > 0, x > 0 and κ < 1 such that for any n ≥ 0, Then for any p Suppose that there exist C_1, C_2 > 0, x > 0, p > 0 and r > 1 such that for any n ≥ 0, on {n < σ_x} the following hold: Then for any q > p, E[σ_x^q] = ∞ for y_0 > x.

Lyapunov function calculations

Preliminaries
For the case X ⊆ R_+, we use the Lyapunov function f_0 : The truncation at 1 is only necessary for ν < 0, but for convenience we define f_0 as above for all ν ∈ R. For processes on R we will use two related, but different, extensions of f_0 to the whole of R. These are defined for ν ∈ R as follows.
The 'two-sided' function f_2 will be used to establish recurrence (with ν > 0) and transience (ν < 0); the 'one-sided' function f_1 will be used for distinguishing between directional (ν < 0) and oscillatory (ν > 0) transience. Define (4.1) Our first estimate for the D_i will be useful when the drift is dominant. In the calculations here and in the rest of the paper, various constants C < ∞ will appear, whose precise value is not important and may change from line to line.
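The displayed definitions of f_0, f_1, f_2 did not survive extraction; the following sketch reconstructs them from the properties used later in the proofs (f_0 ≡ 1 on [0, 1], f_0 = f_1 = f_2 on R_+, f_2 symmetric, f_1 flat on the negative half-line), so it should be read as our plausible reconstruction rather than the paper's verbatim definition:

```python
def f0(x, nu):
    """One-sided Lyapunov function on R_+: x**nu truncated at 1.
    The truncation matters only for nu < 0 (it keeps f0 bounded near 0)."""
    return x ** nu if x > 1 else 1.0

def f1(x, nu):
    """'One-sided' extension to R: agrees with f0 on R_+ and is constant
    on the negative half-line, so it does not react to x -> -infinity."""
    return f0(x, nu) if x > 1 else 1.0

def f2(x, nu):
    """'Two-sided' symmetric extension: f2(x) = f0(|x|)."""
    return f0(abs(x), nu)

# f0 = f1 = f2 on [1, infinity), as used repeatedly in Section 4:
print(f0(2.0, -0.5), f1(2.0, -0.5), f2(-2.0, -0.5))
```

With ν < 0 all three functions are bounded in [0, 1] and f_1, f_2 vanish only as x → +∞ or |x| → ∞ respectively, which is exactly the shape required by the transience criteria of Section 3.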

Lyapunov function on the half line
Lemma 4.1 is only useful for large drift; otherwise, we must use the tail assumptions on the increments to evaluate more precisely the other contributions to D_i. First we consider X ⊆ R_+; the calculations in this setting will be a model for the other cases. Set Note that ν ↦ κ_0(ν) is continuous on (−∞, α).

Lyapunov functions on the whole line
In the case of heavier outwards tails, the computations for f 1 and f 2 are naturally related to those that we did for f 0 in the last section, so we can reuse many calculations here.

Lemma 4.3.
Suppose that the random walk Ξ on R satisfies (T out α,β,c ). Then, for any ν for which α − β < ν < α, the following hold. First, as x → +∞, where κ_0 is defined at (4.2). Second, as x → ±∞, Moreover, since f_0(x) = f_1(x) = f_2(x) for all x ≥ 0 (and a fixed ν), conditional on ξ_n = x it is clearly the case that f_i(x + θ^+) and f_i(x − θ^−)1{θ^− ≤ x} do not depend on which i ∈ {0, 1, 2} we are using. Thus the only difference from our computation in Lemma 4.2 arises from the possibility now that θ^− > x (which was previously precluded).
In the places in the proof of Lemma 4.2 where θ^− is allowed to be big, we used only that (i) f_0(x) = 1 for x ≤ 1, (ii) f_0(x) ∈ [0, 1] for all x ∈ R_+ if ν < 0, and (iii) f_0(x) is non-decreasing for x ∈ R_+ if ν > 0. All of (i)-(iii) extend to x ∈ R with f_1 in place of f_0. Thus the proof of the result for f_1 follows verbatim that of Lemma 4.2, replacing f_0 by f_1, and noting that some statements should be extended from R_+ to R.
Consider f_2. Suppose that we can show that (4.11) holds for x → +∞. If assumption (T out α,β,c ) holds for ξ_n, it also holds for −ξ_n. Then, by the symmetry f_2(−x) ≡ f_2(x), we may apply (4.11) to the process −ξ_n, to get a quantity that is, as x → +∞, equal to the right-hand side of (4.11) but with E_x[θ] replaced by −E_{−x}[θ]. This shows that (4.11) also holds for x → −∞. Thus it suffices to prove (4.11) for x → +∞; so we take x ≥ 1, as in the first paragraph of this proof.
We describe how to modify the proof of Lemma 4.2 to obtain (4.11). For ν = 1, the analogue of (4.4) is as before.
Now suppose that α − β < ν < α with ν ∉ {0, 1}. Following the proof of Lemma 4.2 exactly, we can show that equation (4.8) holds for f_2 in place of f_0. Now consider dealt with in exactly the same way as in the proof of Lemma 4.2, so to show that the analogue of equation (4.10) holds for f_2, it is enough to prove that, as x → +∞, If ν < 0 this follows in the same way as (4.9), since f_2(x) ∈ [0, 1] for all x ∈ R. If ν > 0, because β > α. This proves (4.12), and hence also the analogue of (4.10) for f_2. The remainder of the proof exactly follows the proof of Lemma 4.2.
Proof. As in the proof of Lemma 4.3, for both results it suffices to suppose that x ≥ 1 (and later to take x → +∞). Also, without loss of generality, we may suppose that 1 < β < α < 2. We can treat f_1 and f_2 together for most of the computations. Indeed, let i ∈ {1, 2}. Then, by the monotonicity of x ↦ f_i(x) for x ≥ 1, Suppose that ν < β. Markov's inequality with assumption (T in α,β,c ) implies that, for x > x_0, , and since ν < β < α we also have Using (4.5), and the fact that (T in α,β,c ) holds for some α ∈ (1, 2), we can bound this expression by .

The half line
First we establish our recurrence criteria.
Similarly, if (2.2) holds then we can find ν < 0 such that the right-hand side of (5.1) is again negative for all x sufficiently large, but now with f_0(x) → 0 as x → ∞. Hence Lemma 3.2 implies transience, again noting (N).
Before proving Theorem 2.3 on moments of passage times, we state a lemma that we will need for our non-existence-of-moments results in the case where the drift is dominant.
Proof. Suppose that x ≥ x_0. The idea is that the chain may start with a very big jump, and then take a long time to return to near 0 (a similar idea was used in, e.g., the proof of Theorem 2.10 of [7]). Fix η = 1 + γ, and for x, y ∈ R_+ let w_y(x) = (y − x)^η 1{x < y}. We claim that there exist y_0 ∈ R_+ and B < ∞ (not depending on y) such that E[w_y(ξ_{n+1}) − w_y(ξ_n) | ξ_n = x] ≤ B, for all y ≥ y_0 and all x ≥ y/2. for all y ≥ y_0 and all n ≥ 1. Let a ≥ x_0. Then for x > a and any A ∈ (0, ∞), Combining (5.4) and (5.5) and using (T α,β,c ), we get P_x[τ_a > n] ≥ c n^{−α/η} for some c > 0 and all n large enough, uniformly in x > a ≥ x_0. Since η = 1 + γ, we get E_x[τ_a^q] = ∞ for x > a and q ≥ α/(1 + γ).
Putting these pieces together we get by choice of η and the assumption on µ(x). This completes the proof of (5.2).
i.e., solving (2.3), such that the following two statements hold.

The whole line
First we deal with the case of heavier outwards tails. Recall the definition of κ 0 at (4.2).
Finally, if γ < α − 1 then, for all x sufficiently large, D_1(x) and D_2(x) have the same sign as νb. For b > 0 we can choose ν < 0 such that Lemma 3.5 with the function f_1 implies directional transience. For b < 0, we can take ν = 1 + γ ∈ (0, α) to see by (5.8) that D_2(x) ≤ −ε for all x outside of a bounded set, so Foster's criterion (e.g. Theorem 2.6.2 of [12]) gives positive recurrence. This proves parts (ii) and (iii) of the theorem.
To show that ν = ν , we adapt an idea from Lemma 11 in [11].
In the critical case transience is again oscillatory, since in D_1(x) the key quantity is ν(b + cκ_0(0)) + κ_1(0), which is −1 + O(ν). We omit the details, as they are similar to the previous proofs.
Finally, for the case γ < α − 1 and b < 0, part (iii) follows as in previous proofs, the argument for Lemma 5.1 also working in the balanced case.

Example in the plane
Proof of Theorem 2.14. For ν ∈ R we define the Lyapunov function g : R² → R_+ by g(x) := ‖x‖^ν for ‖x‖ ≥ 1, Suppose throughout that ‖x‖ ≥ 1. Using assumption (A), we can write Considering first the case of a radial jump, since x = ‖x‖u_x, we have In the same way that Lemma 4.3 follows from the assumptions (T out α,β,c ) and (D γ,b ) of Section 2.3, here assumptions (B), (C), and (D) imply that for any ν ∈ (α − β, α), (Note that the process ‖ξ_n‖ does not itself satisfy assumptions (T out α,β,c ) and (D γ,b ) of Section 2.3, as the law of its increments is not sufficiently uniform in x.) We turn to the transverse jumps. Let ε ∈ (0, 2 − α).
Since cosec(πα) < 0 for α ∈ (1, 2), if p_R c_R + 2 p_T c_T cos(πα/2) > 0 then there exists ν > 0 for which E[g(ξ_{n+1}) − g(ξ_n) | ξ_n = x] ≤ 0 for all ‖x‖ large enough, and we get the recurrence in part (i) of the theorem by Lemma 3.1 applied to X_n = ‖ξ_n‖ with f(x) = f_0(x) (which tends to ∞). On the other hand, if p_R c_R + 2 p_T c_T cos(πα/2) < 0, then there exists ν < 0 for which again E[g(ξ_{n+1}) − g(ξ_n) | ξ_n = x] ≤ 0 for all ‖x‖ large enough, and we get the transience in part (ii) of the theorem by Lemma 3.2 applied to X_n = ‖ξ_n‖ with f(x) = f_0(x) (which now tends to 0).

A An integration by parts formula
The following lemma, based on the partial integration formula for the Riemann-Stieltjes integral, is a mild generalization of Theorem 2.12.3 of [6].
Lemma A.1. Let X ≥ 0 be a random variable, and g : R_+ → R a continuous function. Let the finite partition a_0 = 0 < a_1 < · · · < a_{k+1} = ∞ of R_+ be such that g is monotonic on [a_i, a_{i+1}) and differentiable on (a_i, a_{i+1}) for each i = 0, . . . , k. Then, for any a ≥ 0, where both sides converge or diverge simultaneously.
Proof. By assumption, for each i = 0, . . . , k − 1, the function g is continuous on [a_i, a_{i+1}] and E[g(X)1{a_i < X ≤ a_{i+1}}] is finite and, using e.g. Theorem 2.9.3 of [6], is equal to If a < a_k then a ∈ [a_j, a_{j+1}) for some j < k, and similar reasoning gives Consequently, the result follows if E[g(X)1{X > a_k}] = g(a_k)P[X > a_k] + ∫_{a_k}^∞ g′(x)P[X > x] dx and both sides converge or diverge simultaneously. In other words, without loss of generality, we may assume that a ≥ a_k.
Under this assumption, we may also assume that g is non-negative and non-decreasing on [a, ∞), by passing to ±(g(x) − g(a)) as necessary; therefore, for all b > a, E[g(X)1{X > a}] ≥ E[g(X)1{a < X ≤ b}] + g(b)P[X > b] ≥ E[g(X)1{a < X ≤ b}].
Applying the partial integration formula to the middle term above, we obtain E[g(X)1{X > a}] ≥ ∫_a^b g′(x)P[X > x] dx + g(a)P[X > a] ≥ E[g(X)1{a < X ≤ b}], and the result follows by taking the limit b → ∞, since the right-hand side tends to the (possibly infinite) limit E[g(X)1{X > a}] by monotone convergence.
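As a sanity check on the formula (our own numerical verification, not part of the paper), take X exponential with unit rate and g(x) = x², for which both sides of E[g(X)1{X > a}] = g(a)P[X > a] + ∫_a^∞ g′(x)P[X > x] dx are computable:

```python
import math

# X ~ Exp(1), so P[X > x] = exp(-x); take g(x) = x**2 and a = 0.5.
a = 0.5

def tail(x):
    return math.exp(-x)

# Left side in closed form: E[X^2 1{X > a}] = e^{-a} (a^2 + 2a + 2).
lhs = math.exp(-a) * (a * a + 2 * a + 2)

# Right side: g(a) P[X > a] + integral_a^infinity g'(x) P[X > x] dx,
# with g'(x) = 2x, approximated by the composite midpoint rule.
n, upper = 200000, 50.0
h = (upper - a) / n
integral = sum(2 * (a + (k + 0.5) * h) * tail(a + (k + 0.5) * h)
               for k in range(n)) * h
rhs = (a * a) * tail(a) + integral

print(abs(lhs - rhs) < 1e-5)  # True
```

The truncation of the integral at 50 is harmless here since the exponential tail beyond that point is negligible.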

B Some useful integrals
Recall that the beta function B(p, q) is given by B(p, q) = Γ(p)Γ(q)/Γ(p + q) = B(q, p).
The incomplete beta function B_x(p, q), defined for 0 ≤ x ≤ 1 by the integral (B.1), is usually only defined for p, q > 0, in which case B_1(p, q) = B(p, q). However, the integral in (B.1) is finite for any q ∈ R provided that x < 1 and p > 0, so we extend the definition of B_x(p, q) to this full range of the parameters. Note that then The following recurrence relation is well known, but since it is usually assumed that q > 0, we repeat the proof here.
Proof. Since p > 0, we can write B_x(p, q) = B_x(p, q + 1) + B_x(p + 1, q), where all three terms are finite. Evaluating qB_x(p + 1, q) using integration by parts yields qB_x(p + 1, q) = −x^p(1 − x)^q + pB_x(p, q + 1), which, when combined with the previous identity, gives the result. Identity (B.3) is easily proved when p > 0 and q > 0 (direct evaluation with the beta integral) or when p > −1, p ≠ 0 and q > 1 (integration by parts); however, the region −1 < p < 0 and 0 < q < 1 is not covered by either of these cases.
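The identity qB_x(p + 1, q) = −x^p(1 − x)^q + pB_x(p, q + 1) can be checked numerically for parameters with q < 0, outside the classical range (our own check; the midpoint-rule quadrature below is a crude stand-in for an exact evaluation):

```python
def inc_beta(p, q, x, n=200000):
    """Incomplete beta B_x(p, q) = int_0^x u^(p-1) (1-u)^(q-1) du,
    approximated by the composite midpoint rule; the integral is finite
    for p > 0 and x < 1, for any real q."""
    h = x / n
    return sum(((k + 0.5) * h) ** (p - 1) * (1 - (k + 0.5) * h) ** (q - 1)
               for k in range(n)) * h

# Parameters with q < 0, where the classical definition does not apply:
p, q, x = 1.5, -0.5, 0.7
lhs = q * inc_beta(p + 1, q, x)
rhs = -x ** p * (1 - x) ** q + p * inc_beta(p, q + 1, x)
print(abs(lhs - rhs) < 1e-5)  # True
```

Since x < 1 the factor (1 − u)^{q−1} stays bounded on [0, x], so the quadrature is stable even though q − 1 < −1.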
We can now collect the statements that we will need in the body of the paper. The first, Corollary B.5, is for q > 0 no stronger than the identity (B.3) above, but for q ∈ (−1, 0) it quantifies the rate of divergence of the integral as x → ∞.
The remaining two results are more straightforward.
Lemma B.6. For p, q ∈ R with q > −1, q ≠ 0, and p + q < 1, as x → ∞, Proof. Integrating by parts, we find which is valid for any x > 0, since p + q < 1. The change of variable v = u^{−1} yields