Strong transience for one-dimensional Markov chains with asymptotically zero drifts

For near-critical, transient Markov chains on the non-negative integers in the Lamperti regime, where the mean drift at $x$ decays as $1/x$ as $x \to \infty$, we quantify the degree of transience via the existence of moments for conditional return times and for last exit times, assuming increments are uniformly bounded. Our proof uses a Doob $h$-transform for the transient process conditioned to return, and we show that the conditioned process is also of Lamperti type with appropriately transformed parameters. To do so, we obtain an asymptotic expansion for the ratio of two return probabilities, evaluated at two nearby starting points; a consequence of this is that the return probability for the transient Lamperti process is a regularly-varying function of the starting point.


Introduction
A transient, irreducible Markov chain X = (X_0, X_1, …) on a countable state space S has P(τ = ∞) ∈ (0, 1), where τ is the first return time to a given state. Such a chain is strong transient if, moreover, E[τ | τ < ∞] < ∞. The concept of strong transience goes back at least to Port [33]. It is known (see Lemma 2.2 below) that the strong transience condition E[τ | τ < ∞] < ∞ is equivalent both to (a) ∑_{n=1}^∞ n P(X_n = x) < ∞ (for x ∈ S) and to (b) the last exit time from any state being integrable. For example, in the case where X is the simple symmetric random walk on Z^d started from the origin X_0 = 0, one has P(X_{2n} = 0) ≍ n^{−d/2}, and so formulation (a) shows that strong transience occurs for d ≥ 5, while for d ∈ {3, 4} there is transience but not strong transience.
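With the local estimate P(X_{2n} = 0) ≍ n^{−d/2}, the threshold in formulation (a) comes from a p-series computation; we record it here for orientation (for general β it anticipates Proposition 6.2 below):

```latex
\sum_{n \geq 1} n^{\beta}\, \mathbb{P}(X_n = 0)
  \asymp \sum_{n \geq 1} n^{\beta - d/2} < \infty
  \iff \beta - \tfrac{d}{2} < -1
  \iff d > 2\beta + 2,
```

so for β = 1 strong transience requires d ≥ 5, as stated.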
In the present paper we investigate strong transience, and the finer property E[τ^β | τ < ∞] < ∞, β > 0, in the context of Markov chains on Z_+ := {0, 1, 2, …} in the Lamperti regime [24, 25, 26], which is the critical regime for recurrence and transience in the case of processes whose increments have constant-order variance. As well as being of interest in their own right, Lamperti processes are prototypical near-critical stochastic processes that have arisen in numerous contexts: see e.g. [9] and [30] for surveys. Strong transience is of interest both as a quantification of transience, and also as it bears on the geometry of transient trajectories, as has been explored for random walks in the context of the range (that is, how many sites the process has visited by a given time; see [3, 20, 21] and [19, §6.2]) and points of self-intersection and cut points (points that separate the past and future trajectories into disjoint sets; see [12, 13]).

Strong transience
Before describing Lamperti processes and our main result in detail (in Section 3 below), we start by reviewing the concept of strong transience, the main focus of this paper, in the context of discrete Markov chains satisfying the following assumption.
(M) Suppose that X = (X_n; n ∈ Z_+) is an irreducible, time-homogeneous Markov chain on a countable state space S.
The transition probabilities of X specify a collection of laws P_x, x ∈ S, for the chain started from a fixed initial state x ∈ S, i.e., P_x(X_0 = x) = 1. We write E_x for the expectation corresponding to P_x. The initial state will play no part in our results, but will be used in formulating our notation. One can also realise X on an externally specified probability space in which X_0 is random; we write simply P for the probability measure in such cases, and we also use P to stand in for P_x when x is unimportant. We use the notation N := {1, 2, 3, …}.
For y ∈ S, define the first hitting time τ_y of y and the last exit time λ_y from y as
τ_y := inf{n ∈ N : X_n = y};  λ_y := sup{n ∈ N : X_n = y};   (2.1)
where the conventions inf ∅ := +∞ and sup ∅ := 0 are in force. Of course, the distributions of τ_y and λ_y depend not only on y but also on the distribution of X_0; in particular, we will introduce notation for their moments under P_x, when X_0 = x is fixed. Define for β > 0 and x, y ∈ S the quantities
T_β(x, y) := E_x[τ_y^β ½{τ_y < ∞}];  L_β(x, y) := E_x[λ_y^β];  U_β(x, y) := ∑_{n∈N} n^β P_x(X_n = y);
in the definition of T_β the convention is that τ_y^β ½{τ_y < ∞} := 0 if τ_y = ∞. Also write T_β(x) := T_β(x, x), L_β(x) := L_β(x, x), U_β(x) := U_β(x, x); in words, T_β(x) is, for the Markov chain started from X_0 = x, the βth moment of the random variable that is equal to the first return time to x when finite, and takes value 0 if the Markov chain never returns to x.
Definition 2.1. Suppose that (M) holds. Let β > 0. We say that X is β-strong transient if T_β(x) < ∞ for some x ∈ S. If β = 1, we simply say X is strong transient.
The following result gives equivalent formulations of β-strong transience. The intuition is that, in the transient case, the number of returns to a given state is geometrically distributed, so roughly equivalent tail behaviour is exhibited by conditional first-return times, last exit times, and even the sum of times at which the process visits that site.
Lemma 2.2. Suppose that (M) holds, and that X is transient. Let β > 0. Then the following are equivalent:
(i) T_β(x) < ∞ for some (hence every) x ∈ S;
(ii) L_β(x) < ∞ for some (hence every) x ∈ S;
(iii) U_β(x) < ∞ for some (hence every) x ∈ S.
For β = 1, we can trace the definition of strong transience back to Port [33] and Jain and Orey [20] (observing that Lemma 2.2 gives the equivalence of the definitions). Much of Lemma 2.2 is contained in Theorem 3.2 of [32] (which deals with β = 1) and Theorem 4 of [41]; we give a short proof in Appendix A. Note that T_1(x, y) = ∑_{n∈Z_+} P_x(n < τ_y < ∞) and, by Fubini's theorem, U_β(x, y) = E_x[∑_{n∈N} n^β ½{X_n = y}], which (for β = 1) is the form of U_1(x, y) used in [20, 33].
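As a quick sanity check on formulation (a) of Lemma 2.2 (an illustration we add here, not part of the results): for the transient biased simple random walk on Z with up-probability p > 1/2, one has P_0(X_{2m} = 0) = C(2m, m)(pq)^m, which decays geometrically, so the series ∑_n n P_0(X_n = 0) is finite, consistent with strong transience.

```python
from math import comb

# Biased simple random walk on Z: up with probability p, down with q = 1 - p.
# For p > 1/2 the walk is transient, and P_0(X_{2m} = 0) = C(2m, m) (p q)^m.
p, q = 0.7, 0.3

def p_at_zero(m):
    # m up-steps and m down-steps, in any order
    return comb(2 * m, m) * (p * q) ** m

# Partial sum of sum_n n P_0(X_n = 0) (odd n contribute zero), together with
# a crude geometric tail bound: C(2m, m) <= 4^m, so the terms are at most
# 2m (4 p q)^m, and here 4 p q = 0.84 < 1.
M = 200
series = sum(2 * m * p_at_zero(m) for m in range(1, M))
tail_bound = 4 * M * (4 * p * q) ** M / (1 - 4 * p * q) ** 2
print(series, tail_bound)
```

The tiny tail bound confirms the truncated sum already captures the series, so criterion (a) holds with β = 1 for this walk.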

Lamperti processes
We now turn to the class of models that we will study. We suppose that (M) holds for state space S = Z_+. It follows from irreducibility that X is non-confined, i.e., lim sup_{n→∞} X_n = ∞, a.s.; see e.g. [30, Cor. 2.1.10]. If X is recurrent, then lim inf_{n→∞} X_n = 0, a.s., while if X is transient, then lim_{n→∞} X_n = ∞, a.s. [30, Lem. 3.6.5]. It is the transient case that interests us here.
The assumptions that we impose all pertain to the transition probabilities of the Markov chain; the distribution of X_0 plays no role. We will assume the following.
(B) Suppose that there exists a constant B ∈ N such that, for all n ∈ Z_+, P(|X_{n+1} − X_n| ≤ B | X_n) = 1, a.s.
(I) Suppose that there exist ε > 0 and m ∈ N such that, for every i ∈ Z_+, max_{1≤n≤m} P_j(X_n = i) ≥ ε for all j ∈ Z_+ with |j − i| ≤ B, where B is the constant from (B).
Assumption (B) is boundedness of increments, while (I) is a strengthening of the irreducibility condition to incorporate some uniformity. Indeed, irreducibility shows that, for every i, j ∈ Z_+, P_j(X_{n_{i,j}} = i) ≥ ε_{i,j} for some n_{i,j} ∈ N and ε_{i,j} > 0. Hence, setting m_i := max_{j:|j−i|≤B} n_{i,j} and ε_i := min_{j:|j−i|≤B} ε_{i,j}, we have max_{1≤n≤m_i} P_j(X_n = i) ≥ ε_i for all i, j ∈ Z_+ with |j − i| ≤ B, where m_i ∈ N and ε_i > 0; assumption (I) demands that one may take m_i and ε_i to be independent of i.
Under assumption (B), the increment moment function µ_k : Z_+ → R given by
µ_k(x) := E[(X_{n+1} − X_n)^k | X_n = x]   (3.2)
is well defined for any k ∈ N (and does not depend on n ∈ Z_+).
From the point of view of the recurrence and transience of X, the most interesting regime is when 2xµ_1(x) and µ_2(x) are comparable: investigation of this case goes back to pioneering work of Lamperti [24, 25, 26]. We will assume the following Lamperti-type asymptotic conditions.
(L) Suppose that there exist finite constants c and s² such that 2c > s² > 0, and, as x → ∞,
µ_1(x) = c/x + o(1/(x log x)) and µ_2(x) = s² + o(1/log x).
Under the preceding assumptions of this section, condition (L) ensures transience, via a result of Lamperti [24, Thm. 3.1, p. 320], and this is optimal in the sense that 2c ≤ s² implies recurrence [30, Thm. 3.5.2, p. 108] (see also [29]). The main result of this paper is the following classification of β-strong transience for Lamperti processes of the type described in the present section.
Theorem 3.1. Suppose that (M) holds for S = Z_+, and that (B), (I), and (L) hold. Let β > 0. If 2c > (2β + 1)s², then X is β-strong transient; if 2c < (2β + 1)s², then X is not β-strong transient.
Probably the simplest example to illustrate Theorem 3.1 is that where X is an irreducible nearest-neighbour random walk (or birth-and-death chain) for which P(X_{n+1} = x ± 1 | X_n = x) = (1 ± (c/x))/2, for all x ∈ N with x > |c|.
For this example, µ_1(x) = c/x and µ_2(x) = 1, so (L) holds with this c and with s² = 1. While the assumption (B) is an obstacle to some applications (see Remark 3.2(e)), Lamperti processes with bounded jumps have found recent applications in survival and dominance of agents trading in financial markets, and in polymer pinning and wetting models from statistical physics (see, respectively, [5] and [1], and references therein). Before making some more detailed comments on Theorem 3.1 (in Remarks 3.2 below), we outline the scheme of the proof, around which the rest of this paper is constructed. The first step in the proof of Theorem 3.1 is to define, via an appropriate Doob h-transform, a version of the Markov chain X conditioned to return to 0. This construction is presented in Section 4, and is based on a modulation of the transition probabilities according to the function h(x), which is the probability that the (transient) Markov chain X visits 0 in finite time, started from x ∈ S. Analysis of the behaviour of the conditioned Markov chain requires a rather precise study of the hitting-probability function h. There are two main component results in this direction in the body of the paper, which are of some independent interest, but whose formal statements we defer until later, after the necessary definitions and notation.
• Theorem 5.1 presents an asymptotic estimate for h(x+z)/h(x) for fixed z as x → ∞.
• Theorem 4.3 establishes that the conditioned process is itself a sort of Lamperti process, satisfying a version of (L) with appropriately transformed increment moment parameters, but losing a factor of log x in the error terms.
This loss of control in the error terms for the conditioned process is the reason why Theorem 3.1 does not cover the boundary case where 2c = (2β + 1)s²: see also Remark 3.2(a). The proof of Theorem 3.1, which is presented at the end of Section 4, follows from Theorem 4.3 and estimates on passage-time moments for Lamperti processes from [2, 30]. Theorem 5.1, whose proof is presented in Section 5 and contains most of the technical work of the paper, yields Theorem 4.3. (A corollary of the ratio asymptotic in Theorem 5.1 is the regular variation of the function h.) Section 6 discusses some additional context and related literature. In particular, Section 6.1 presents some comparisons between Theorem 3.1 and strong transience results for multidimensional, homogeneous random walks; this comparison gives access to some additional intuition for the location of the phase transition exhibited in Theorem 3.1. Section 6.2 describes a connection between Lamperti processes and branching processes with migration, in which context we discuss work of Kosygina & Zerner [23] that complements the present work (see also Remark 3.2(c)). Finally, Appendix A gives the proof of the general Lemma 2.2 on the characterization of strong transience for countable Markov chains.
Remarks 3.2. (a) As mentioned above, Theorem 3.1 does not cover the boundary case where 2c = (2β + 1)s². Such boundary cases can be delicate (see e.g. [29]). A central step in the proof we present below is a coupling argument, which seems inherently to lead to the loss of an O(log x) factor in the error terms of the increment moments of the conditioned process compared to µ_1, µ_2 in (L) (cf. Theorem 4.3). There are two cases where we believe that Theorem 3.1 can be strengthened to assert that the boundary case where 2c = (2β + 1)s² is not β-strong transient. The first would be to impose a stronger version of (L) in which the log x factor in the denominators in the o(·) terms is replaced by log² x. The second is the case of a nearest-neighbour walk, since there one can avoid the coupling step in the proof below altogether: see Remark 5.3(d). In those two cases, Theorem 4.3 could be strengthened to achieve error terms comparable to (L), and hence one could settle the boundary case where 2c = (2β + 1)s². We do expect that the boundary case being not strong transient is rather generic, but it is not clear to us whether the assumption (L) is sufficient for this in general.
(b) By considering U_β(0) = ∑_{n∈N} n^β P(X_n = 0), Theorem 3.1 would also follow from Lemma 2.2 if one established the validity of the local limit estimate
lim_{n→∞} n^q P(X_n = 0) exists in (0, ∞), where q := (2c + s²)/(2s²).   (3.3)
However, the only local limit theorem results that we are aware of assume nearest-neighbour increments, where X_{n+1} − X_n = ±1, and null recurrence. In that setting, s² = 1, and the available proofs of the limit theorem (3.3) require a specific form for the transition probabilities, while Alexander [1] uses an excursion approach, which seems not to be applicable in the transient case. We are not aware of (3.3) having been established in any transient example. A heuristic explanation for (3.3) is that P(X_n = 0) should be comparable to P(τ_0 = n) and P(λ_0 = n), which Theorem 3.1 suggests are about n^{−q+o(1)} for q = (2c + s²)/(2s²); see [4, 11] for comparison results for such quantities in the case of some transient random walks with i.i.d. increments.
(c) A study of excited random walks due to Kosygina & Zerner [23] establishes criteria for strong transience for a class of branching processes with migration, which can be cast as Lamperti processes (see Section 6.2 below). The technical approach of [23], like ours, uses a Doob h-transform, and recognizes that conditioning retains the Lamperti-type character of the process; on the other hand, the main line of the analysis of [23] goes via a diffusion approximation, quite different from our method. Furthermore, [23] obtains some asymptotic results on the harmonic function h that complement our Theorem 5.1 below; see Remark 5.3(a).
(d) A natural continuum comparison for the Lamperti process in Theorem 3.1 is a Bessel process with "dimension" (parameter) δ = (2c + s²)/s². When δ > 0, this comparison is formalized at the level of weak convergence by classical work of Lamperti [25]; see also [30, §3.11]. The transient case is δ > 2, and the fact that the Bessel process has Gamma marginals establishes a Bessel analogue of (3.3). However, local approximation of Lamperti processes by Bessel processes has only been carried through in the nearest-neighbour case, as far as we are aware: see e.g. [1].
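To spell out the Bessel analogy (a classical scale-function computation, added here for convenience): for a Bessel process of dimension δ > 2, the function x ↦ x^{2−δ} is harmonic on (0, ∞), so optional stopping gives, for 0 < a < x,

```latex
\mathbb{P}_x(\text{hit } a) = \frac{x^{2-\delta}}{a^{2-\delta}}
  = \Big(\frac{a}{x}\Big)^{\delta - 2},
\qquad
\delta - 2 = \frac{2c + s^2}{s^2} - 2 = \frac{2c - s^2}{s^2},
```

so the continuum hitting probability decays with exactly the exponent γ_c of (5.1) (there, 2c − (γ_c + 1)s² = 0 gives γ_c = (2c − s²)/s²), in line with Corollary 5.2.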
(e) It would be interesting to relax the assumption (B) of bounded jumps, as this would broaden the range of applications to include, for example, random walks on a half-strip [17] and branching processes with migration (see Section 6.2). We emphasize, however, that the technical challenges in our approach that lead to the "extra log" mentioned in remark (a) are likely to be more demanding with unbounded jumps. In particular, the method of the present paper relies heavily on at least the lower boundedness of jumps for the exponential conditional mixing estimate in Proposition 5.7 below; elsewhere (for example, in the Lyapunov function estimates) one should be able to assume boundedness of certain increment moments instead.
(f) There are other ways to quantify transience, but those that capture the fact that the process is diffusive do not show the phase transition that appears in Theorem 3.1. The following remarks apply under the hypotheses of Theorem 3.1 (and, in some cases, additional assumptions). Diffusive weak convergence results and almost-sure growth bounds showing that X_n is typically about n^{1/2}, building on early work of Lamperti [25], can be found in [30, §§3.9–3.11]; iterated-logarithm results are given e.g. in [16]. Moreover, for the renewal function H(x) := E[∑_{n=0}^∞ ½{X_n ≤ x}], Theorem 5 of Denisov et al. [8] shows that lim_{x→∞} x^{−2} H(x) = 1/(2c − s²); see also [10] for some finer results. Another way to quantify transience is via the number of cut points [12, 27]; it follows from [27, Thm. 1.2] that X has infinitely many cut points, a.s.
(g) Theorem 3.1 shows that a small correction is required in Lemma 3.10.8 of the work [30] involving two of the authors here. Indeed, that result includes the incorrect assertion that the last exit time λ_x satisfies E[λ_x] < ∞ for every transient Lamperti process. The statement of that lemma should be replaced by a suitably weakened one, which is what is required for the proof of Theorem 3.10.1 in [30]. The argument in the published proof of Lemma 3.10.8 in [30] can readily be corrected to obtain this.

The conditioned Markov chain
Suppose that (M) holds, and for i, j ∈ S write p_{i,j} := P(X_{n+1} = j | X_n = i) = P_i(X_1 = j) for the one-step transition probabilities of X. Thus for all n ∈ N and all i, x_1, …, x_n ∈ S,
P_i(X_1 = x_1, X_2 = x_2, …, X_n = x_n) = p_{i,x_1} p_{x_1,x_2} ⋯ p_{x_{n−1},x_n}.   (4.1)
Distinguish an arbitrary state in S by 0 ∈ S, and write τ := inf{n ∈ Z_+ : X_n = 0}. Define the hitting probabilities
h(i) := P_i(τ < ∞), for i ∈ S.   (4.2)
Note that h(0) = 1 and, in the notation at (2.1), h(i) = P_i(τ_0 < ∞) for i ≠ 0. Define p̂_{0,j} := p_{0,j} for all j ∈ S, and
p̂_{i,j} := p_{i,j} h(j)/h(i), for i ∈ S \ {0}, j ∈ S.   (4.3)
The function h : S → (0, 1] is harmonic for P := (p_{i,j})_{i,j∈S} stopped at 0, in the sense that h(i) = ∑_{j∈S} p_{i,j} h(j) for all i ∈ S \ {0}, as follows from the Markov property. Thus ∑_{j∈S} p̂_{i,j} = 1 for all i ∈ S, so the p̂_{i,j} define a Markov transition law with modified transition probabilities when away from 0. Let P̂_i denote the law on X generated by initial state i and Markov transition probabilities p̂_{i,j}, i.e., for all n ∈ N and all i, x_1, …, x_n ∈ S,
P̂_i(X_1 = x_1, X_2 = x_2, …, X_n = x_n) = p̂_{i,x_1} p̂_{x_1,x_2} ⋯ p̂_{x_{n−1},x_n}.   (4.4)
In particular, it follows from (4.4), (4.3) and (4.1) that, for all i, x_1, …, x_{n−1} ∈ S \ {0},
P̂_i(X_1 = x_1, …, X_{n−1} = x_{n−1}, X_n = x_n) = P_i(X_1 = x_1, …, X_n = x_n) h(x_n)/h(i).   (4.5)
Let Ê_i denote the expectation corresponding to P̂_i. The Markov law given by (4.4) is the Doob h-transform of the law (4.1) corresponding to h given by (4.2); it has the following conditioning interpretation.
Lemma 4.1 allows us to interpret β-strong transience for X under the original measure P_i in terms of expected return times under the transformed measure P̂_i, as expressed in the following corollary.
by (4.2), and so the first claim follows. For the final statement in the lemma, it suffices to prove the corresponding path identity for any n ∈ Z_+ and any x_1, …, x_n ∈ S \ {0}. Here we have that, by the strong Markov property under P_i, for x_1, …, x_n ≠ 0, the path probability factorizes, by (4.4) and (4.5), and this yields (4.6). Similarly, using the fact that h(0) = 1, we obtain (4.7).
Now we return to the case where X is a Lamperti process on S = Z_+, as in Section 3. In view of Corollary 4.2, we will study the β-strong transience of X under P_i by studying the conditioned version of X under P̂_i. Since h(i) > 0 for all i ∈ Z_+, it follows from (4.3) that p̂_{i,j} > 0 whenever p_{i,j} > 0, and hence irreducibility under p_{i,j} implies irreducibility under p̂_{i,j}. Similarly, under condition (B), it is the case that P̂_x(|X_1 − X_0| ≤ B) = 1, so bounded jumps also hold for the conditioned process. For k ∈ N, define the increment moment functions corresponding to the conditioned process by
µ̂_k(x) := Ê_x[(X_1 − X_0)^k] = ∑_{z:|z|≤B} z^k p_{x,x+z} h(x + z)/h(x), for x ∈ N.   (4.8)
Recall from Corollary 4.2 that β-strong transience of the P process equates to the existence of β-moments of return times for the P̂ process. The main part of the proof of Theorem 3.1 will be provided by Theorem 4.3 below, which shows that the P̂ process is itself of Lamperti type, but with transformed parameters (such that it is recurrent, rather than transient), and with weaker control of the error terms compared to (L).
Theorem 4.3. Suppose that (M) holds for S = Z_+, and that (B), (I), and (L) hold. Then P̂ := (P̂_x)_{x∈Z_+} defines an irreducible Markov chain on Z_+ with uniformly bounded increments, for which, as x → ∞,
µ̂_1(x) = (s² − c)/x + o(1/x) and µ̂_2(x) = s² + o(1).
Proof of Theorem 3.1. By Corollary 4.2, it suffices to show that Ê_i[τ^β] < ∞ for some (hence every) i ∈ S \ {0}. Theorem 3.2.6 of [30] (which specializes to the case of Markov chains on Z_+ results of [2]) shows that sufficient for this is that 2ĉ < (1 − 2β)s², where ĉ := s² − c is the drift parameter of the conditioned chain in Theorem 4.3; this is precisely the condition 2c > (2β + 1)s².
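To illustrate Theorem 4.3 numerically, we can take the nearest-neighbour example of Section 3 and compute the Doob transform exactly via the classical birth-and-death formula for h. The concrete transition probabilities p(x, x ± 1) = (1 ± c/x)/2 and the value ĉ = s² − c for the conditioned drift are our hypothetical reading of the example and of the transformed parameters; the latter is consistent with the Bessel duality δ ↦ 4 − δ suggested by Remark 3.2(d).

```python
# Nearest-neighbour chain: p(x, x+1) = (1 + c/x)/2, p(x, x-1) = (1 - c/x)/2
# for x >= 1 (a hypothetical concrete instance: mu_1(x) = c/x, mu_2(x) = 1).
c = 0.9  # 2c > s^2 = 1: transient regime

def up(x):
    return 0.5 * (1 + c / x)

def down(x):
    return 0.5 * (1 - c / x)

# Classical birth-and-death formula: h(x) = P_x(hit 0) = S(x)/S(0), where
# S(x) = sum_{k >= x} rho_k, rho_0 = 1, rho_k = prod_{i=1..k} down(i)/up(i).
# We truncate the infinite sums at a large level N.
N = 400_000
rho = [1.0] * N
for k in range(1, N):
    rho[k] = rho[k - 1] * down(k) / up(k)
S = [0.0] * (N + 1)
for k in range(N - 1, -1, -1):
    S[k] = S[k + 1] + rho[k]

def h(x):
    return S[x] / S[0]

# Doob h-transform at a state x far from both 0 and the truncation level:
x = 1000
up_hat = up(x) * h(x + 1) / h(x)
down_hat = down(x) * h(x - 1) / h(x)
row_sum = up_hat + down_hat          # = 1, since h is harmonic away from 0
drift_hat = x * (up_hat - down_hat)  # approx c_hat = s^2 - c = 0.1 here
print(row_sum, drift_hat)
```

The row sum confirms the transformed probabilities form a stochastic matrix, and x µ̂_1(x) lands near s² − c, so the conditioned chain is indeed Lamperti-type and recurrent (2ĉ < s²).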

Ratio expansions for return probabilities
The main aim of this section is to prove Theorem 4.3; to do so, we study in more detail the return-probability function h defined at (4.2). In order to estimate µ̂_1(x) and µ̂_2(x) in Theorem 4.3, we need in (4.8) to have estimates for h(x + z)/h(x), at least for |z| ≤ B. Theorem 5.1 below presents the central estimate. As a consequence, we deduce (in Corollary 5.2 below) that the function h is regularly varying and eventually decreasing, which is a result of some independent interest. For c, s² the parameters in (L), define the critical exponent
γ_c := (2c − s²)/s².   (5.1)
Note that, by (L), we have γ_c ∈ (0, ∞).
Remarks 5.3. (a) Kosygina & Zerner [23] obtain an asymptotic estimate for h, of a somewhat different kind than that in Theorem 5.1, for a class of branching processes with migration; see Section 6.2 below for the translation to the Lamperti context. The assumptions in [23] require specific structure for the process, and specified distributions of increments, but they do not (and must not) assume bounded jumps. In our notation, Proposition 4.3 of [23] says that lim_{x→∞} x^{γ_c} h(x) is a finite positive constant, and hence
h(x) = (C + o(1)) x^{−γ_c}, as x → ∞, for some constant C ∈ (0, ∞).   (5.3)
This statement neither implies, nor is implied by, the ratio limit result (5.2); it does imply regular variation of h. An anonymous referee suggests the terminology that (5.2) be described as a local-at-infinity result, in contrast to the non-local asymptotics (5.3).
(b) Another result adjacent to Corollary 5.2 is Theorem 2.20 of Denisov et al. [9], which provides upper and lower bounds for h(x) in terms of a regularly-varying "near-harmonic" function, under moment conditions weaker than the uniform bound (B).
(c) The slowly-varying function L in Corollary 5.2 cannot be determined by asymptotic assumptions alone: it can change by a constant factor if one modifies a single transition probability near 0, for example.
(d) If (B) is augmented with the additional left-continuity (or "left skip-free") assumption that X_{n+1} − X_n ≥ −1, a.s., then the proof of Theorem 5.1 (and hence of Theorem 3.1) simplifies significantly. Indeed, left continuity implies the factorization of h recorded at (5.4), and in this case a relatively crude optional stopping argument, based on the fact that X_n^{−(γ_c±ε)}, ε > 0, is a sub/supermartingale outside a bounded set (cf. Lemma 5.4 below for a finer result), is enough to show the required one-level asymptotics, which, with (5.4), yields (5.2).
Proof of Corollary 5.2. Since h(0) = 1, we can write h(x) as a telescoping product of the ratios h(y)/h(y − 1), y = 1, …, x. Since ∑_{y=1}^{x} y^{−1} = log x + υ + o(1), where υ ≈ 0.5772 is Euler's constant, it follows that h(x) = x^{−γ_c} L(x), where the function L(x) := x^{γ_c} h(x) satisfies L(λx)/L(x) → 1 for every fixed λ > 0, which implies that L is slowly varying [7, p. 12]. Moreover, from the z = 1 case of (5.2), we have that, for all x sufficiently large, h(x + 1) < h(x), and hence h is eventually decreasing, as claimed.
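A quick numerical illustration of the decay h(x) = x^{−γ_c} L(x), again on the hypothetical nearest-neighbour example with p(x, x ± 1) = (1 ± c/x)/2, using the classical birth-and-death formula for h:

```python
# Hypothetical nearest-neighbour example: p(x, x+1) = (1 + c/x)/2 for x >= 1.
c = 0.9
gamma_c = 2 * c - 1  # (2c - s^2)/s^2 with s^2 = 1

# h(x) = (sum_{k >= x} rho_k)/(sum_{k >= 0} rho_k), rho_k = prod down(i)/up(i);
# we truncate at N and record partial sums at a few checkpoints.
N = 1_000_000
checkpoints = (1000, 2000, 4000)
rho, total, below = 1.0, 1.0, {}
for k in range(1, N):
    if k in checkpoints:
        below[k] = total  # sum_{j < k} rho_j
    rho *= (1 - c / k) / (1 + c / k)
    total += rho

vals = [(total - below[x]) / total * x ** gamma_c for x in checkpoints]
print(vals)  # x^{gamma_c} h(x): approximately constant in x
```

The near-constancy of x^{γ_c} h(x) across the checkpoints is consistent with the regular variation asserted in Corollary 5.2 (and, for this example, with the stronger non-local asymptotics (5.3)).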
The rest of this section is devoted to the proof of Theorem 5.1. As described in Remark 5.3(d), this is relatively straightforward if the process is left-continuous, because there is no randomness in the entrance distribution when crossing a level to the left. In general, the argument outlined in Remark 5.3(d) does not work as stated and must be refined; we show (Proposition 5.7 below) that left-crossing distributions stabilize rapidly, using a coupling argument, and we use a refined Lyapunov function to obtain a sufficient optional stopping estimate (see Lemma 5.11 below). We start by introducing the Lyapunov function that we will use.
Lemma 5.4. Suppose that (M) holds for S = Z_+, and that (B) and (L) hold. If ν > 0, there exists a drift bound of one definite sign for f outside a bounded set; on the other hand, if ν < 0, there exists the complementary bound of the opposite sign.
Proof. Lemma 3.4.1 of [30] shows that (noting the change in sign of γ there), for γ ∈ R, as x → ∞,
E_x[X_1^{−γ} − X_0^{−γ}] = −γ x^{−γ−2} ( x µ_1(x) − ((γ + 1)/2) µ_2(x) ) + O(x^{−γ−3}),
where µ_k is as defined at (3.2). By assumption (L), it follows that
E_x[X_1^{−γ} − X_0^{−γ}] = −(γ/2) x^{−γ−2} ( 2c − (γ + 1)s² ) + o(x^{−γ−2}).
Taking γ = γ_c as defined at (5.1), and using the fact that 2c − (γ_c + 1)s² = 0, we see that the leading term vanishes, and a remaining term, with sign determined by ν, survives; this, since γ_c s² > 0, yields the result.
The next result uses optional stopping to give rough bounds on h(x) for later use.
Lemma 5.5. Suppose that (M) holds for S = Z_+, and that (B) and (L) hold. Define h by (4.2). Let ε > 0. Then, for all x sufficiently large,
x^{−γ_c−ε} ≤ h(x) ≤ x^{−γ_c+ε}.   (5.6)
Remark 5.6. The bounds in Lemma 5.5 are too weak to give a good estimate of the key ratio in Theorem 5.1; indeed, from Lemma 5.5 one cannot even conclude that lim_{x→∞} h(x + z)/h(x) exists. Thus the proof of Theorem 5.1 requires additional ideas beyond standard Lyapunov function estimates.
For I ⊆ Z_+, define the first hitting time of the set I by X via
η_I := inf{n ∈ Z_+ : X_n ∈ I}.   (5.9)
By irreducibility, P_x(η_I < ∞) > 0 for every non-empty I ⊆ Z_+ and every x ∈ Z_+. The next result is a key ingredient in our proof. It shows exponential stability of the entrance distribution to an interval, started from a long way above that interval, conditioned on hitting the interval. The proof uses coupling and relies on the uniform irreducibility assumption (I).
Proposition 5.7. Suppose that (M) holds for S = Z_+, and that (B), (I), and (L) hold. For a ∈ Z_+, let I_a := [a, a + B] ∩ Z, where B is the constant in (B). For all a ∈ Z_+ and all u ∈ I_a, θ_a(u) := lim_{x→∞} P_x(X_{η_{I_a}} = u | η_{I_a} < ∞) exists. Moreover, there exist constants C < ∞ and b > 0 such that, for all a ∈ Z_+ and all ℓ > 0,
sup_{u∈I_a} sup_{x≥a+ℓ} | P_x(X_{η_{I_a}} = u | η_{I_a} < ∞) − θ_a(u) | ≤ C e^{−bℓ}.
We will apply Proposition 5.7 in the following form. For y ∈ R_+, let ⌊y⌋ denote the largest integer no greater than y.
Corollary 5.8. Suppose that (M) holds for S = Z_+, and that (B), (I), and (L) hold. For A ∈ R_+ define a(x) := x − ⌊A log x⌋. Let δ > 0. Then there exist A_δ, x_B ∈ R_+ such that, for all A ≥ A_δ, all x ≥ x_B, and all |z| ≤ B, the bound (5.10) holds.
Before proving Proposition 5.7 and its corollary, we introduce another conditioned Markov chain. For a, i ∈ Z_+, define
g_a(i) := P_i(η_{I_a} < ∞),
where η_{I_a} is defined at (5.9). Note that g_a(i) = 1 for i ≤ a + B. Similarly to h as defined at (4.2), g_a : Z_+ → (0, 1] is harmonic for P. Let q^a_{i,j} denote the one-step transition probabilities for X conditioned on η_{I_a} < ∞, i.e., the Doob transform relative to g_a:
q^a_{i,j} := p_{i,j} g_a(j)/g_a(i), for i > a + B,
and q^a_{i,j} := p_{i,j} for i ≤ a + B. Write Q^a := (q^a_{i,j})_{i,j∈S} for the corresponding transition matrix, and Q^a_i for the law corresponding to initial state i ∈ Z_+ and transition matrix Q^a: i.e., for all n ∈ N and all i, x_1, …, x_n ∈ Z_+,
Q^a_i(X_1 = x_1, …, X_n = x_n) = q^a_{i,x_1} ⋯ q^a_{x_{n−1},x_n}.
The following result gives an analogue of Lemma 4.1, and shows that the uniform irreducibility hypothesis (I) carries over to Q^a_i.
Lemma 5.9. Suppose that (M) holds and that S = Z_+. Let a ∈ Z_+. Then Q^a_i(η_{I_a} < ∞) = 1 for all i ∈ Z_+. Moreover, for every i ∈ Z_+, under P_i, the law of (X_0, X_1, …, X_{η_{I_a}}) given η_{I_a} < ∞ is the same as the law of (X_0, X_1, …, X_{η_{I_a}}) under Q^a_i. In addition, if (B) and (I) hold, then there exists ε′ > 0 such that, for every a, i ∈ Z_+,
max_{1≤n≤m} Q^a_j(X_n = i) ≥ ε′ for all j ∈ Z_+ with |j − i| ≤ B,   (5.12)
where m ∈ N is as in (I).
Proof. We establish the uniform irreducibility statement for Q^a; the other statements follow in the same way as the corresponding statements in Lemma 4.1. Let m ∈ N and ε > 0 be the constants from (I). Then a consequence of (I) is that, for all i, j ∈ Z_+ with |j − i| ≤ B, there exists n_{i,j} with n_{i,j} ≤ m such that, by the Markov property,
g_a(i) ≥ P_i(X_{n_{i,j}} = j) g_a(j) ≥ ε g_a(j).
Hence q^a_{i,j} ≥ ε p_{i,j} for all i, j ∈ Z_+ with |i − j| ≤ B. It follows that Q^a_i(X_n = j) ≥ ε^n P_i(X_n = j) for all i, j, n ∈ Z_+, and all a. Then (5.12) follows from (I), with ε′ = ε^{m+1}, where m ∈ N is as in (I).
Proof of Proposition 5.7. Fix a ∈ Z_+. We construct, on a single probability space, (an embedding of) two coupled copies of X conditioned to reach the interval I_a, and then stopped: one with law Q^a_i and one with law Q^a_j. Set x_1 := a + B, and let η := inf{n ∈ Z_+ : X_n ≤ x_1}; by (B), for all i ∈ Z_+ with i ≥ x_1, it is the case that P_i(η = η_{I_a}) = Q^a_i(η = η_{I_a}) = 1, with η_{I_a} as defined at (5.9). Moreover, Lemma 5.9 says that Q^a_i(η < ∞) = 1. Consider the process (Y, Y′) with (Y_0, Y′_0) = (i, j) and the following one-step transition probabilities: each coordinate of the process stops as soon as it enters [0, x_1]; otherwise, if Y and Y′ coincide, then they jump together according to the transition matrix Q^a, while if they do not coincide, then whichever of Y, Y′ is bigger jumps according to Q^a, with the other coordinate remaining fixed; all other transition probabilities are zero. Define K (respectively, K′) to be the set of times at which Y (respectively, Y′) may jump. Process Y can jump only at times in K, process Y′ can jump only at times in K′, and both processes can jump (together) only at times in K ∩ K′. List the elements of K as a (possibly terminating) increasing sequence. It follows that Y observed along K has law Q^a_i stopped on entry to [0, x_1], and similarly for Y′ along K′, so that
sup_{u∈I_a} | Q^a_i(X_η = u) − Q^a_j(X_η = u) | ≤ P(Y and Y′ are never coupled before stopping).   (5.13)
It remains to bound the right-hand side of (5.13). The idea is that at each time n for which Y_n, Y′_n are such that |Y′_n − Y_n| ≤ B, the uniform irreducibility bound (5.12) from Lemma 5.9 shows that there is uniformly positive probability of coupling within m steps; if not, one can try again later, and the total number of coupling attempts available before reaching I_a grows linearly in i ∧ j, by (B). We give the details.
Take i, j > x_1 + (k + 1)mB for some k ∈ N. If |i − j| > B, then the larger of the two processes will move and, since (by Lemma 5.9) it cannot stop before entering [0, x_1], we will eventually have |Y_n − Y′_n| ≤ B. Hence we may, without loss of generality, suppose that |i − j| ≤ B. It then follows from (5.12) that the two copies couple at some time n ≤ m with probability at least some ε″ > 0, uniformly in a, i, j. Otherwise, we can apply the same argument at time m, by which time Y_n, Y′_n ≥ x_1 + (k − 1)mB. Given i, j > x_1, choose the constant k ∈ Z_+ so that i ∧ j ≥ x_1 + kmB, i.e., k = ⌊((i ∧ j) − x_1)/(mB)⌋. Then, iterating the above coupling argument k times, we get that the probability that the two copies fail to couple before stopping is at most C e^{−bk}, where the constants C < ∞ and b > 0 depend on B, m, and ε″, but do not depend on a, i, or j. Then (5.13) yields, for all i, j ≥ a + B,
sup_{u∈I_a} | P_i(X_η = u | η < ∞) − P_j(X_η = u | η < ∞) | ≤ C e^{−b((i∧j)−a)/(mB)}.   (5.14)
In particular, the bound (5.14) shows that, for each u ∈ I_a, P_i(X_η = u | η < ∞) is a Cauchy sequence in i, and hence θ_a(u) = lim_{i→∞} P_i(X_η = u | η < ∞) exists; the exponential convergence rate also follows from (5.14).
Proof of Corollary 5.8. By Proposition 5.7, with a = a(x) and ℓ = x − a(x) = ⌊A log x⌋, the entrance distribution to I_{a(x)} from x + z is within C e^{−b(⌊A log x⌋−B)} of θ_{a(x)}, for all |z| ≤ B and all x ≥ e^{1+2B} =: x_B. Hence we may choose A ≥ A_δ large enough so that (5.10) holds, as claimed.
The next result enables us to express the ratio of return probabilities in terms of hitting probabilities of a relatively nearby interval that is nevertheless far from the origin, and hence more amenable to estimation by Lyapunov functions based on our asymptotic assumptions.
Lemma 5.10. Suppose that (M) holds for S = Z_+, and that (B), (I), and (L) hold. Let a(x) = x − ⌊A log x⌋, for A ≥ A_{γ_c+3} the constant in Corollary 5.8. Then, as x → ∞, uniformly for |z| ≤ B,
h(x + z)/h(x) = (1 + o(1)) P_{x+z}(η_{I_{a(x)}} < ∞) / P_x(η_{I_{a(x)}} < ∞).
Before proving Lemma 5.10, we examine the probabilities P_{x+z}(η_{I_{a(x)}} < ∞). Take a(x) = x − ⌊A log x⌋, as in Corollary 5.8. For ν ∈ R, x ∈ R_+, and z ∈ R with x ≥ 1 and x + z ≥ 0, define
R_ν(x, z) := f_{γ_c,ν}(x + z)/Θ_ν(x),   (5.15)
where Θ_ν averages the Lyapunov function over the entrance distribution,
Θ_ν(x) := ∑_{u∈I_{a(x)}} θ_{a(x)}(u) f_{γ_c,ν}(u),   (5.16)
and where θ_a(u) is as defined in Proposition 5.7, f is as defined at (5.5), and γ_c is given by (5.1). Note also the elementary comparison recorded at (5.17), valid for every γ > 0. The following lemma combines the Lyapunov function ideas of Lemma 5.5 with the stability of the interval entrance distribution from Corollary 5.8 to obtain refined hitting-probability bounds.
Proof of Lemma 5.10. First observe that, by (B), any path from x + z to 0 must pass through I_{a(x)} whenever x + z > a(x) + B, so that, by (4.2) and the strong Markov property applied at time η_{I_{a(x)}},
h(x + z) = E_{x+z}[ ½{η_{I_{a(x)}} < ∞} h(X_{η_{I_{a(x)}}}) ].   (5.18)
Combining the δ = γ_c + 3 case of (5.10) with (5.18), applied at x + z ∈ Z_+ and with a = a(x) for A ≥ A_{γ_c+3} the constant in Corollary 5.8, we obtain (5.19). Lemma 5.11, with the observation (5.17), shows that the main terms match to the required order, while, by Lemma 5.5, for any ε > 0 and all x sufficiently large, the error terms are negligible, uniformly for |z| ≤ B; then using (5.20) the claimed result follows.

Fix A ≥ A_{γ_c+3}, where A_{γ_c+3} is the constant in Corollary 5.8, and take x_A ≥ 1 for which a(x) = x − ⌊A log x⌋ satisfies a(x) ≥ 1 for all x ≥ x_A. For ν ∈ R and x ≥ x_A, define the comparison quantities needed below. For a given A ∈ R_+ and δ > 0, an application of the mean value theorem shows that we can choose ε > 0 and x′_A ≥ x_A such that, for all |ν| ≤ ε, the estimate (5.22) holds; it then follows from (5.22) that, for ε > 0 sufficiently small, a two-sided version of the same bound holds for all |ν| ≤ ε and all x ≥ x′_A. Since R_ν as defined at (5.15) satisfies R_ν(x, z) = f_{γ_c,ν}(x + z)/Θ_ν(x), we get (5.23) for all x ≥ x′_A. Here, for |z| ≤ B, the bound (5.24) holds. Thus from (5.23) and (5.24) we obtain, for all |ν| ≤ ε and all x sufficiently large, a two-sided estimate on the ratio of interest. Using this in (5.21), we obtain, for any δ > 0 and all x sufficiently large, the corresponding bound on the supremum over |z| ≤ B. Since δ > 0 was arbitrary, the result follows.
Finally, we can complete the proof of Theorem 4.3.

Multidimensional random walks
In this section, we draw attention to a relevant comparison between our main result and what is known about strong transience for multidimensional random walks. Let Z, Z_1, Z_2, … be a sequence of i.i.d. random variables in Z^d, d ∈ N. Let S = (S_0, S_1, …) be the associated random walk, given by S_n := ∑_{i=1}^{n} Z_i, n ∈ Z_+. Suppose that S is genuinely d-dimensional, i.e., supp Z is not contained in any (d − 1)-dimensional subspace of R^d. Denote by φ(u) := E[e^{iu^⊤ Z}], u ∈ R^d, the characteristic function of Z.
The following result is due to Port (Theorem 4.4 of [32], for the case β = 1; see also [19, p. 144]) and Takeuchi (Theorem 6 of [41]); a Lévy process analogue can be found in [37], while for the case of Brownian motion in R³, Spitzer [39] attributes the result to Joffe [22].
Proposition 6.2. Suppose that S is genuinely d-dimensional and let β > 0. If d > 2β + 2, then S is β-strong transient. If, in addition, E[‖Z‖²] < ∞ and E[Z] = 0, then S is β-strong transient if and only if d > 2β + 2.
For the proof of this result, we recall some terminology about lattice random walks, and a consequence of the multidimensional lattice local limit theorem: for reference, see [6, Ch. 5], [40, §7], or [28, §A]. If Z generates a genuinely d-dimensional lattice random walk S, Lemma 21.4 of [6] shows that there is a unique minimal subgroup L of R^d such that P(Z ∈ b + L) = 1 for any b ∈ R^d with P(Z = b) > 0. The subgroup L is of the form L = HZ^d for a non-singular, d-dimensional matrix H, and minimality of L is equivalent to a condition on det H, as well as the condition that |φ(u)| = 1 if and only if u ∈ 2π(H^⊤)^{−1}Z^d (see e.g. Lemma A.4 of [28]).
The period of $S$ is the maximal $\ell \in \mathbb{N}$ such that $\mathbb{P}(S_{\ell n} = 0) > 0$ for all $n \in \mathbb{N}$. If $\ell = 1$, we say the random walk $S$ is aperiodic. If $Z$ generates a walk $S$ of period $\ell \ge 2$, then the increment $\widetilde{Z} := Z_1 + \cdots + Z_\ell$ generates an aperiodic random walk $\widetilde{S}$, and then $\mathbb{P}(S_{\ell n} = 0) = \mathbb{P}(\widetilde{S}_n = 0)$. Moreover, since $\mathbb{P}(\widetilde{Z} = 0) > 0$, the increment $\widetilde{Z}$ has $\mathbb{P}(\widetilde{Z} \in \widetilde{L}) = 1$ for an associated minimal lattice $\widetilde{L}$, with no shift. Thus it suffices to assume that our original $Z$ is such that $b = 0$. With the transformation $Z \mapsto H^{-1} Z$, we can further reduce to the case where $H = I$ (the identity). This is the setting which Spitzer calls "strong aperiodicity" [40, pp. 42, 75].
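A concrete instance of this reduction, for the simple symmetric random walk on $\mathbb{Z}$ (this example is ours, chosen for illustration):

```latex
Z \in \{-1,+1\} \text{ with equal probability, so } \ell = 2; \qquad
\widetilde{Z} := Z_1 + Z_2 \in \{-2, 0, 2\}, \quad
\mathbb{P}(\widetilde{Z} = 0) = \tfrac12 > 0, \qquad
\widetilde{L} = 2\mathbb{Z} = H\mathbb{Z} \text{ with } H = 2.
```

The rescaled increment $H^{-1}\widetilde{Z} \in \{-1, 0, 1\}$ then generates an aperiodic (indeed strongly aperiodic) walk on $\mathbb{Z}$.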
Suppose that $\mathbb{E}[\|Z\|^2] < \infty$ and $\mathbb{E} Z = 0$. Then (since $Z$ is genuinely $d$-dimensional) there is a positive definite, symmetric, $d \times d$ matrix $\Sigma$ such that $\mathbb{E}[Z Z^{\top}] = \Sigma$. Any non-singular linear transformation (such as through the reduction $Z \mapsto H^{-1} Z$) merely transforms the covariance matrix $\Sigma$. Keeping track of the various reductions, the multidimensional lattice local central limit theorem [40, pp. 75-77] implies the following.

Lemma 6.3. Suppose that $Z$ has a lattice distribution, generates a genuinely $d$-dimensional random walk $S$ with period $\ell \in \mathbb{N}$, and satisfies $\mathbb{E}[\|Z\|^2] < \infty$ and $\mathbb{E} Z = 0$. Then there is a constant $\rho > 0$ depending only on $d$ and $\Sigma := \mathbb{E}[Z Z^{\top}]$ such that, for all $n \in \mathbb{N}$, $\mathbb{P}(S_{\ell n} = 0) \ge \rho n^{-d/2}$.

Proof of Proposition 6.2. For a genuinely $d$-dimensional random walk, one has the estimate $\sup_{y \in \mathbb{Z}^d} \mathbb{P}(S_n = y) \le C n^{-d/2}$ for some $C < \infty$ and all $n \in \mathbb{N}$: this follows from concentration function estimates [14, Thm. 6.2] or [40, p. 72]. Hence $\sum_{n=1}^{\infty} n^{\beta} \sup_{y \in \mathbb{Z}^d} \mathbb{P}(S_n = y) < \infty$ whenever $d > 2\beta + 2$, and then Lemma 2.2 yields $\beta$-strong transience for $d > 2\beta + 2$. On the other hand, if $\mathbb{E}[\|Z\|^2] < \infty$ and $\mathbb{E} Z = 0$, then the multidimensional lattice local central limit theorem (Lemma 6.3) and another appeal to Lemma 2.2 complete the proof.
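The threshold $d > 2\beta + 2$ comes down to a comparison of series; reading Lemma 2.2 in its $\beta$-version (the weight $n^{\beta}$ replacing the weight $n$ in criterion (a) of the introduction), the two bounds combine as:

```latex
% Upper bound, from sup_y P(S_n = y) <= C n^{-d/2}:
\sum_{n=1}^{\infty} n^{\beta}\,\mathbb{P}(S_n = 0)
  \;\le\; C \sum_{n=1}^{\infty} n^{\beta - d/2} \;<\; \infty
  \iff \frac{d}{2} - \beta > 1
  \iff d > 2\beta + 2;
% Lower bound, from Lemma 6.3:
\sum_{n=1}^{\infty} n^{\beta}\,\mathbb{P}(S_n = 0)
  \;\ge\; \sum_{n=1}^{\infty} (\ell n)^{\beta}\,\mathbb{P}(S_{\ell n} = 0)
  \;\ge\; \rho\,\ell^{\beta} \sum_{n=1}^{\infty} n^{\beta - d/2}
  \;=\; \infty
  \quad \text{if } d \le 2\beta + 2.
```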
We indicate an analogy between the above results and our result for Lamperti processes, Theorem 3.1.
where the error terms are uniform in $z \in \mathbb{Z}^d$ as $\|z\| \to \infty$. Thus a non-Markovian version of the Lamperti drift conditions (L) holds, with $c = (d-1)/(2d)$ and $s^2 = 1/d$. Lamperti's recurrence theory (see [24] and [30, Ch. 3]) extends to this non-Markovian setting, and again shows that we have transience when $2c > s^2$, i.e., $d > 2$. This argument establishes Pólya's recurrence theorem, and extends to any case where $\mathbb{E}[\|Z\|^2] < \infty$ and $\mathbb{E} Z = 0$. Moreover, we see that our Theorem 3.1 is consistent with Proposition 6.2, even though our theorem does not apply directly, since $X = \|S\|$ is not Markov. The main obstacle to extending the method of the present paper to the multidimensional setting seems to be Proposition 5.7, which relies on the one-dimensional nature of the problem.
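For the simple symmetric random walk, the constants $c$ and $s^2$ can be recovered from a second-order expansion of the norm; a sketch, using that the increment $Z$ is uniform on $\{\pm e_1, \ldots, \pm e_d\}$, so that $\|Z\| = 1$ and $\mathbb{E}[(z^{\top} Z / \|z\|)^2] = 1/d$ for any $z \neq 0$:

```latex
\|z + Z\| - \|z\|
  = \frac{z^{\top} Z}{\|z\|}
    + \frac{1}{2\|z\|} \biggl( \|Z\|^{2}
        - \Bigl( \frac{z^{\top} Z}{\|z\|} \Bigr)^{2} \biggr)
    + O(\|z\|^{-2}) .
```

Taking expectations, the first term vanishes since $\mathbb{E} Z = 0$, and the second gives the radial drift $\frac{1}{2\|z\|}(1 - \frac{1}{d}) = \frac{d-1}{2d}\cdot\frac{1}{\|z\|}$, i.e., $c = (d-1)/(2d)$; squaring instead, the leading term gives $s^2 = \mathbb{E}[(z^{\top} Z/\|z\|)^2] = 1/d$.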

Critical branching processes with migration
Let $\Xi := (\xi_{n,i} ; n \in \mathbb{N}, i \in \mathbb{N})$ be an array of independent $\mathbb{Z}_+$-valued random variables with distribution identical to that of a random variable $\xi$, and let $\zeta, \zeta_1, \zeta_2, \ldots$ be i.i.d. $\mathbb{Z}$-valued random variables independent of the collection $\Xi$. Define $W_0 := w_0 \in \mathbb{N}$ (the initial population size) and for $n \in \mathbb{N}$ set
\[ W_n := \Bigl( \sum_{i=1}^{W_{n-1}} \xi_{n,i} + \zeta_n \Bigr)^{+} . \]
The non-negative Markov chain $(W_n ; n \in \mathbb{Z}_+)$ is a branching process with migration, with offspring distribution $\xi$ and migration distribution $\zeta$. When $\zeta > 0$ this represents immigration from outside the population, and $\zeta < 0$ represents emigration; the truncation at zero ensures that the population size $W_n$ at generation $n$ cannot go negative. Define $\tau_E := \inf\{ n \in \mathbb{Z}_+ : W_n = 0 \}$; the event $\{\tau_E < \infty\}$ corresponds to extinction of the population. Note, however, that if $\mathbb{P}(\zeta > 0) > 0$ then immigration will eventually restart the process, so $0$ is not necessarily an absorbing state. A transformation of $W_n$ yields a Lamperti process in the following sense.
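The recursion defining $W_n$ can be sketched as a short simulation. This is illustrative only: the truncated-at-zero form of the recursion and the $\pm 1$ migrant law are our assumptions for the sketch, while the shifted-geometric offspring law is the one appearing in Remark 6.7.

```python
import random

def sample_shifted_geometric(rng):
    """Offspring law from Remark 6.7: P(xi = k) = 2^(-1-k) for k >= 0,
    so E[xi] = 1 (critical) and Var(xi) = 2."""
    k = 0
    while rng.random() < 0.5:  # each "failure" adds one offspring
        k += 1
    return k

def step(w, offspring, migration):
    """One generation: each of the w individuals reproduces independently,
    then the migrant increment is added; truncation at 0 keeps W_n >= 0."""
    return max(sum(offspring() for _ in range(w)) + migration(), 0)

def simulate(w0, n_steps, seed=0):
    """Return the trajectory (W_0, ..., W_{n_steps})."""
    rng = random.Random(seed)
    offspring = lambda: sample_shifted_geometric(rng)
    migration = lambda: rng.choice((-1, 1))  # illustrative migrant law
    traj = [w0]
    for _ in range(n_steps):
        traj.append(step(traj[-1], offspring, migration))
    return traj

traj = simulate(w0=5, n_steps=200)
```

Note that the simulated chain can hit $0$ and restart, consistent with $0$ not being absorbing when $\mathbb{P}(\zeta > 0) > 0$.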
Proposition 6.4. Suppose that there exists $p > 2$ such that $\mathbb{E}[\xi^p] + \mathbb{E}[|\zeta|^p] < \infty$, and that $\mathbb{E}\xi = 1$; write $\theta := \mathbb{E}\zeta$ and $\sigma^2 := \operatorname{Var}\xi$. Then the process $X_n := \sqrt{W_n}$ is a Markov chain on the countable state space $\mathcal{X} := \sqrt{\mathbb{Z}_+}$, for which $\mu_1, \mu_2 : \mathcal{X} \to \mathbb{R}$, defined analogously to (4.1), satisfy, for an $\varepsilon > 0$ depending on $p$, the Lamperti drift asymptotics with parameters determined by $\theta$ and $\sigma^2$.

From Proposition 6.4, it is a standard application of Lamperti's recurrence classification [24, Thm. 3.1, p. 320] to obtain the following result, which is mostly contained in Theorem 1 of Pakes [31] (see e.g. [30, pp. 114-115] for a similar application in another branching-process context).

Corollary 6.5. Under the conditions of Proposition 6.4, if $2\theta > \sigma^2$ then $\mathbb{P}(\tau_E = \infty) > 0$ (i.e., survival is possible), while if $2\theta \le \sigma^2$ then $\mathbb{P}(\tau_E < \infty) = 1$ (i.e., extinction is certain).

The process $X_n$ from Proposition 6.4 is of Lamperti type (the fact that the state space is $\mathcal{X}$ and not $\mathbb{Z}_+$ is unimportant), but it does not satisfy the bounded jumps hypothesis (B), so our Theorem 3.1 on strong transience does not apply. One might reasonably expect, however, that the conclusions of Theorem 3.1 would still apply, at least under sufficiently strong moment conditions on $\xi$ and $\zeta$; some positive evidence in this direction is provided by work of Kosygina & Zerner [23] (see Remark 6.7). We formulate the following problem to address the general case.

Problem 6.6. Under the conditions of Proposition 6.4, find effective conditions which ensure that if $2\theta > (\beta + 1)\sigma^2$ then $\mathbb{E}[\tau_E^\beta \mid \tau_E < \infty] < \infty$.

Remark 6.7. The example where $\mathbb{P}(\xi = k) = 2^{-1-k}$, $k \in \mathbb{Z}_+$ (a shifted geometric distribution) has $\mathbb{E}\xi = 1$ and $\operatorname{Var}\xi = 2 = \sigma^2$, and so in this case the result proposed in Problem 6.6 would say that $\mathbb{E}[\tau_E^\beta \mid \tau_E < \infty] < \infty$ whenever $\theta > \beta + 1$. For the shifted-geometric example (in fact, for a somewhat more general class of examples, allowing some partial non-independence), this putative result is indeed true, as has been established by Kosygina & Zerner [23].
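The parameters in Corollary 6.5 can be anticipated by an informal computation (a sketch, assuming criticality $\mathbb{E}\xi = 1$, with $\theta := \mathbb{E}\zeta$, $\sigma^2 := \operatorname{Var}\xi$, and ignoring error terms): given $W_n = w = x^2$ large, the increment $\Delta := W_{n+1} - w$ has $\mathbb{E}\Delta \approx \theta$ and $\mathbb{E}[\Delta^2] \approx \sigma^2 w$, so a Taylor expansion of the square root gives

```latex
\sqrt{w + \Delta} - \sqrt{w}
  = \frac{\Delta}{2\sqrt{w}} - \frac{\Delta^{2}}{8 w^{3/2}} + \cdots,
\qquad \text{whence} \qquad
\mu_1(x) \approx \frac{\theta}{2x} - \frac{\sigma^{2} w}{8 w^{3/2}}
        = \frac{4\theta - \sigma^{2}}{8x},
\qquad
\mu_2(x) \approx \frac{\mathbb{E}[\Delta^{2}]}{4w} = \frac{\sigma^{2}}{4}.
```

Lamperti's classification compares $2 x \mu_1(x)$ with $\mu_2(x)$ in the limit: here $2 x \mu_1(x) \to (4\theta - \sigma^2)/4$, which exceeds $\sigma^2/4$ exactly when $2\theta > \sigma^2$, matching the survival/extinction dichotomy of Corollary 6.5.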