Martingale approach to subexponential asymptotics for random walks

Consider the random walk $S_n=\xi_1+\dots+\xi_n$ with independent and identically distributed increments and negative mean $\mathbf E\xi=-m<0$. Let $M=\sup_{i\ge 0} S_i$ be the supremum of the random walk. In this note we present a derivation of the asymptotics for $\mathbf P(M>x)$, $x\to\infty$, for long-tailed distributions. This derivation is based on martingale arguments and does not require any prior knowledge of the theory of long-tailed distributions. In addition, the same approach allows one to obtain the asymptotics for $\mathbf P(M_\tau>x)$, where $M_\tau=\max_{0\le i<\tau}S_i$ and $\tau=\min\{n\ge 1: S_n\le 0 \}$.

It follows from the assumption $\mathbf E\xi<0$ that the total maximum $M:=\sup_{n\ge0}S_n$ is finite almost surely. The asymptotic behaviour of $\mathbf P(M>x)$ has been considered by many authors. The first results are due to Cramér and Lundberg: if there exists $h_0>0$ such that $\mathbf Ee^{h_0\xi}=1$ and $\mathbf E\xi e^{h_0\xi}<\infty$, then
$$\mathbf P(M>x)\sim c_0e^{-h_0x},\quad x\to\infty, \tag{1}$$
for some $c_0\in(0,1)$ and, furthermore,
$$\mathbf P(M>x)\le e^{-h_0x}\quad\text{for all }x\ge0. \tag{2}$$
The proof of these statements is based on the following observation: the assumption $\mathbf Ee^{h_0\xi}=1$ implies that the sequence $e^{h_0S_n}$ is a martingale. Applying the Doob inequality we immediately obtain (2). The same martingale property allows one to make an exponential change of measure, which is used in the proof of (1). If the distribution of $\xi$ is heavy-tailed, i.e., $\mathbf Ee^{h\xi}=\infty$ for all $h>0$, then one can investigate $\mathbf P(M>x)$ only under some additional regularity restrictions on the tail function $\overline F(x):=1-F(x)$. One of the most popular regularity assumptions is the so-called subexponentiality of the distribution tails.
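The Lundberg inequality (2) is easy to check numerically. The following minimal sketch (the Gaussian increment law, all parameter values, and the function names are illustrative assumptions, not taken from the text) simulates the running maximum of a random walk with $\xi\sim N(-m,\sigma^2)$, for which $\mathbf Ee^{h\xi}=e^{-hm+h^2\sigma^2/2}$ and hence $h_0=2m/\sigma^2$:

```python
import random
import math

# Numerical sketch of the Lundberg inequality (2), an illustration only:
# for Gaussian increments xi ~ N(-m, s^2) one has
# E exp(h*xi) = exp(-h*m + h^2*s^2/2), so h0 = 2*m/s^2 solves E exp(h0*xi) = 1.

def simulate_sup(m, s, n_steps, n_paths, seed=1):
    """Approximate M = sup_n S_n by the running maximum over n_steps steps;
    with drift -m < 0 the supremum is almost surely attained early."""
    rng = random.Random(seed)
    sups = []
    for _ in range(n_paths):
        s_n = 0.0
        best = 0.0
        for _ in range(n_steps):
            s_n += rng.gauss(-m, s)
            if s_n > best:
                best = s_n
        sups.append(best)
    return sups

m, s = 0.5, 1.0
h0 = 2.0 * m / s ** 2                        # Cramer-Lundberg exponent
sups = simulate_sup(m, s, n_steps=300, n_paths=10000)
x = 2.0
tail = sum(v > x for v in sups) / len(sups)  # empirical P(M > x)
bound = math.exp(-h0 * x)                    # Lundberg bound from (2)
print(tail, "<=", bound)
```

Since $c_0<1$ in (1), the empirical tail should in fact lie strictly below the bound $e^{-h_0x}$.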
The following result is known in the literature as Veraverbeke's theorem: let $F_I$ be the distribution function defined by the tail
$$\overline F_I(x):=\min\Big\{1,\int_x^\infty\overline F(y)\,dy\Big\}.$$
If $F_I$ is subexponential, then
$$\mathbf P(M>x)\sim\frac1m\int_x^\infty\overline F(y)\,dy,\quad x\to\infty. \tag{3}$$
We next turn to the maximum of the positive excursion of the random walk. Let
$$\tau:=\min\{n\ge1:S_n\le0\}\quad\text{and}\quad M_\tau:=\max_{0\le n<\tau}S_n.$$
If the Cramér-Lundberg condition holds, then one can derive the asymptotics for $\mathbf P(M_\tau>x)$ from that for the total maximum $M$. This way was first suggested by Iglehart [10]. Namely, it follows from the Markov property that
$$\mathbf P(M>x)=\mathbf P(M_\tau>x)+\int_{-\infty}^0\mathbf P(M>x-y)\,\mathbf P(S_\tau\in dy,\ M_\tau\le x).$$
Thus,
$$\mathbf P(M_\tau>x)=\mathbf P(M>x)-\int_{-\infty}^0\mathbf P(M>x-y)\,\mathbf P(S_\tau\in dy,\ M_\tau\le x).$$
Noting that (1) yields $\mathbf P(M>x-y)/\mathbf P(M>x)\to e^{h_0y}$ for every $y<0$, and applying the dominated convergence theorem, we obtain
$$\int_{-\infty}^0\mathbf P(M>x-y)\,\mathbf P(S_\tau\in dy,\ M_\tau\le x)\sim\mathbf P(M>x)\,\mathbf Ee^{h_0S_\tau}.$$
As a result we get
$$\mathbf P(M_\tau>x)\sim c_0\big(1-\mathbf Ee^{h_0S_\tau}\big)e^{-h_0x},\quad x\to\infty. \tag{4}$$
It turns out that Iglehart's approach cannot be applied to heavy-tailed random walks without further restrictions on the distribution of $\xi$. Here one has to assume that $F$ is strong subexponential. This class $\mathcal S^*$ of distribution functions was introduced by Klüppelberg [11]: $F\in\mathcal S^*$ if
$$\int_0^x\overline F(x-y)\overline F(y)\,dy\sim2a_+\overline F(x),\quad x\to\infty, \tag{5}$$
where $a_+:=\int_0^\infty\overline F(y)\,dy\in(0,\infty)$. Denisov [3] adapted Iglehart's reduction from $M_\tau$ to $M$ to the class of strong subexponential distributions: if $F\in\mathcal S^*$, then
$$\mathbf P(M_\tau>x)\sim\mathbf E\tau\,\overline F(x),\quad x\to\infty. \tag{6}$$
The asymptotics (6) were first found by Asmussen [1] for $F\in\mathcal S^*$ and by Heath, Resnick and Samorodnitsky [9] for regularly varying $\overline F$. An extension of this result to general stopping times can be found in Foss and Zachary [8], and in Foss, Palmowski and Zachary [7]. These extensions rely on (6).
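The defining relation (5) can be checked numerically. A small sketch (the Pareto tail $\overline F(y)=(1+y)^{-3}$ is a hypothetical example, not taken from the text): here $a_+=\int_0^\infty(1+y)^{-3}\,dy=1/2$, so the ratio $\int_0^x\overline F(x-y)\overline F(y)\,dy\,/\,\overline F(x)$ should approach $2a_+=1$ for large $x$.

```python
# Numerical check of the strong subexponentiality relation (5) for the
# hypothetical Pareto tail F(y) = (1+y)^(-3), for which a_+ = 1/2.

ALPHA = 3.0

def tail(y):
    return (1.0 + y) ** (-ALPHA)

def convolution_ratio(x, n=200000):
    # Trapezoidal rule on [0, x/2]; the integrand tail(x-y)*tail(y) is
    # symmetric about y = x/2, so the integral over [0, x] is twice this one.
    h = (x / 2.0) / n
    s = 0.5 * (tail(x) * tail(0.0) + tail(x / 2.0) ** 2)
    for i in range(1, n):
        y = i * h
        s += tail(x - y) * tail(y)
    return 2.0 * h * s / tail(x)

ratio = convolution_ratio(1000.0)
print(ratio)  # should be close to 2 * a_+ = 1
```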
The main purpose of the present note is to give alternative proofs of (3) and (6) using martingale techniques.
In order to state our main result we introduce some notation. For any $y>0$ let $\mu_y:=\min\{n\ge0:S_n>y\}$.
The latter stopping time is naturally connected with the supremum, since $\mathbf P(M>y)=\mathbf P(\mu_y<\infty)$. Define also
$$\overline F_s(x):=\int_x^\infty\overline F(y)\,dy$$
and, for $c>0$,
$$G_c(x):=\min\Big\{1,\frac{\overline F_s(x)}{c}\Big\}\ \text{for }x\ge0,\qquad G_c(x):=1\ \text{for }x<0. \tag{7}$$

Theorem 3. Assume that $F$ is long-tailed. For any $\varepsilon>0$ there exists $R>0$ such that the stopped sequence
$$G_{m+\varepsilon}\big(x-S_{n\wedge\mu_{x-R}}\big),\quad n\ge0, \tag{9}$$
is a submartingale. Assume in addition that $F\in\mathcal S^*$. For any $\varepsilon>0$ there exists $R>0$ such that the stopped sequence
$$G_{m-\varepsilon}\big(x-S_{n\wedge\mu_{x-R}}\big),\quad n\ge0, \tag{10}$$
is a supermartingale.

Having constructed the super- and submartingales, we can obtain the subexponential asymptotics for $M$ and $M_\tau$ by applying the optional stopping theorem.
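The way such a stopped super-/submartingale produces tail bounds can be sketched as follows (a sketch only; it uses the convention, assumed here, that $G_c$ equals $1$ on the negative half-line):

```latex
% Supermartingale case: let Y_n := G_{m-\varepsilon}(x - S_{n\wedge\mu_x}).
% On \{\mu_x<\infty\} the walk overshoots x, so x - S_{\mu_x} < 0 and
% Y_{\mu_x} = 1.  Since 0 \le Y_n \le 1, optional stopping gives
\[
G_{m-\varepsilon}(x) = \mathbf{E}Y_0
  \ge \lim_{n\to\infty}\mathbf{E}Y_n
  \ge \mathbf{P}(\mu_x<\infty)
  = \mathbf{P}(M>x).
\]
% The submartingale G_{m+\varepsilon} yields the matching lower bound,
% and letting \varepsilon \downarrow 0 produces the constant 1/m.
```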
Corollary 4. For any long-tailed distribution function $F$ with negative mean,
$$\liminf_{x\to\infty}\frac{\mathbf P(M>x)}{\overline F_s(x)}\ge\frac1m.$$
Assume in addition that $F\in\mathcal S^*$. Then
$$\mathbf P(M>x)\sim\frac1m\overline F_s(x) \tag{11}$$
and
$$\mathbf P(M_\tau>x)\sim\mathbf E\tau\,\overline F(x). \tag{12}$$

To the best of our knowledge, all proofs of Veraverbeke's theorem existing in the literature are based on representations via geometric sums. More precisely,
$$\mathbf P(M>x)=(1-p)\sum_{n\ge0}p^n\,\mathbf P(\chi_1+\dots+\chi_n>x),$$
where $p$ is the probability that the walk ever crosses the zero level and $\chi_1,\chi_2,\dots$ are independent copies of the first ascending ladder height, conditioned to be finite. In order to obtain (3) from that geometric sum one uses the following two properties of subexponential distributions: (a) $\mathbf P(\chi_1+\dots+\chi_n>x)\sim n\,\mathbf P(\chi_1>x)$ for every fixed $n$; (b) Kesten's bound, which allows one to interchange the limit and the summation. A recent elegant proof based on (a) and (b) can be found in [13]. Our proof does not use any property of $F$ besides (5). Unfortunately, our method does not allow us to derive (3) for the whole class of subexponential distributions: the conditions $F_I\in\mathcal S$ and $F\in\mathcal S^*$ are close but do not coincide, see Section 6 in [4]. But we can apply the same construction to $M_\tau$ and, as is known in the literature, strong subexponentiality is optimal for the asymptotics (6).
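As a finite-level sanity check of the heavy-tailed asymptotics, here is a Monte Carlo sketch (the increment law $\xi=\eta-3/2$ with Pareto tail $\mathbf P(\eta>y)=(1+y)^{-2}$, and all parameters, are hypothetical choices): then $m=1/2$, $\overline F(y)=(5/2+y)^{-2}$ and $\frac1m\int_x^\infty\overline F(y)\,dy=2/(5/2+x)$.

```python
import random

# Monte Carlo sketch of the Veraverbeke-type asymptotics (3): xi = eta - 1.5
# with Pareto tail P(eta > y) = (1+y)^(-2), a hypothetical example.  Then
# E eta = 1, m = 0.5, and (1/m) * int_x^inf tail(y) dy = 2/(2.5 + x).

def sample_sup(n_steps, rng):
    """Running maximum of the walk over n_steps steps (truncated horizon)."""
    s = 0.0
    best = 0.0
    for _ in range(n_steps):
        u = 1.0 - rng.random()       # uniform on (0, 1]
        eta = u ** -0.5 - 1.0        # inverse transform: P(eta > y) = (1+y)^-2
        s += eta - 1.5
        if s > best:
            best = s
    return best

rng = random.Random(7)
x = 15.0
n_paths = 4000
hits = sum(sample_sup(1200, rng) > x for _ in range(n_paths))
empirical = hits / n_paths           # empirical P(M > x), truncated horizon
asymptote = 2.0 / (2.5 + x)          # right-hand side of (3)
print(empirical, asymptote)
```

At such a finite level $x$ the agreement is only up to a moderate factor; the asymptotics (3) are exact only in the limit $x\to\infty$.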
It is worth mentioning that, in contrast to all previous proofs, our approach to (12) is direct, i.e., it does not use any knowledge of the asymptotic behaviour of $M$.
One of the important advantages of the martingale approach is the possibility to obtain non-asymptotic inequalities for $\mathbf P(M>x)$ and $\mathbf P(M_\tau>x)$. For example, it follows from (9) that for every $\varepsilon>0$ there exists $R>0$ such that (see the proof of Corollary 4)
$$\mathbf P(M>x)\ge G_{m+\varepsilon}(x+R)\quad\text{for all }x>0.$$
Using the supermartingale property of $G_{m-\varepsilon}$ we obtain the following upper bound: for every $\varepsilon>0$ there exists $R'>0$ such that
$$\mathbf P(M>x)\le G_{m-\varepsilon}(x-R')\quad\text{for all }x>R'. \tag{13}$$
Of course, in order to apply these inequalities one has to know how to compute $R$ and $R'$ for given values of $\varepsilon$, and we believe that this can be done rather easily for certain subclasses of $\mathcal S^*$, e.g., for regularly varying or Weibull tails.

Foss, Korshunov and Zachary [6] have shown that an inequality of this type, valid for all $x>0$, holds without any restriction on the distribution function $F$; see Theorem 5.1 in [6]. This bound is better than (13). Its proof is based on the fact that the distribution of $M$ is the stationary distribution of the Lindley recursion $W_{n+1}=(W_n+\xi_{n+1})^+$. This property of $M$ can be stated as follows: let $\xi'$ be a copy of $\xi$ which is independent of $M$; then $\mathcal L(M)=\mathcal L((M+\xi')^+)$. This can be seen as a martingale property: define $\pi(x):=\mathbf P(M>x)$. Then the sequence $\pi(x-S_{n\wedge\mu_x})$ is a martingale. Using (10), one gets for all $x>R'$ an inequality comparing $\pi$ with $G_{m-\varepsilon}$; an upper estimate for the difference in the numerator is easy to get, and applying the Wald identity we obtain the desired upper bound. A lower bound is not as obvious: here one can conclude from (9) an analogous comparison, and one then needs an appropriate estimate for the difference in the numerator.

The martingale approach has also been used by Kugler and Wachtel [12] in deriving upper bounds for $\mathbf P(M>x)$ and $\mathbf P(M_{\tau_z}>x)$, where $\tau_z:=\min\{n\ge1:S_n\le-z\}$, under the assumption that some power moments of $\xi$ are finite. Their strategy is completely different: they truncate the summands $\xi_i$ in order to construct an exponential supermartingale for the random walk with truncated increments.
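The stationarity property $\mathcal L(M)=\mathcal L((M+\xi')^+)$ can be observed in simulation. In the following sketch the increments are a difference of exponentials (an illustrative M/M/1-type choice, not taken from the text, picked because the law of $M$ is then known in closed form: for $\xi=s-t$ with $s\sim\mathrm{Exp}(2)$ and $t\sim\mathrm{Exp}(1)$ one has $\mathbf P(M>x)=\tfrac12e^{-x}$):

```python
import random
import math

# Lindley recursion W_{n+1} = (W_n + xi_{n+1})^+ driven by xi = s - t with
# s ~ Exp(2), t ~ Exp(1), an illustrative M/M/1-type choice.  The stationary
# law of W is the law of M, here with exact tail P(M > x) = 0.5*exp(-x).

def lindley_sample(n_steps, rng):
    """Run the Lindley recursion from W_0 = 0; W_n is close to stationarity."""
    w = 0.0
    for _ in range(n_steps):
        xi = rng.expovariate(2.0) - rng.expovariate(1.0)
        w = max(w + xi, 0.0)
    return w

rng = random.Random(3)
samples = [lindley_sample(120, rng) for _ in range(20000)]
x = 1.0
empirical = sum(w > x for w in samples) / len(samples)
exact = 0.5 * math.exp(-x)           # stationary tail P(M > 1)
print(empirical, exact)
```

Stationarity is reached quickly here because of the strictly negative drift $\mathbf E\xi=-1/2$.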

2. Proofs.
2.1. Proof of Theorem 3. Fix $\varepsilon>0$. To prove the submartingale property we need to show that
$$\mathbf EG_{m+\varepsilon}(x-y-\xi)\ge G_{m+\varepsilon}(x-y)$$
for all $y\le x-R$.
Put, for brevity, $t:=x-y\ge R$ and $c:=m+\varepsilon$. By the definition (7),
$$\mathbf EG_c(t-\xi)=\mathbf P(\xi>t-r_c)+\frac1c\,\mathbf E\big[\overline F_s(t-\xi);\,\xi\le t-r_c\big], \tag{18}$$
where $r_c:=\min\{x\ge0:\overline F_s(x)\le c\}$. Integrating by parts in each of the two resulting integrals and combining the obtained inequalities, we arrive at a lower bound for the difference $\mathbf EG_c(t-\xi)-G_c(t)$. Taking $R_1$ sufficiently large, we can ensure that the first error term in this bound is negligible for all $t>R_1$; furthermore, we can choose $R_2$ so large that the second error term is negligible for all $t>R_2$. As a result, the difference is non-negative for all $t>\max\{R_1,R_2\}$. This proves (9).
To prove the supermartingale property it is sufficient to show that
$$\mathbf EG_{m-\varepsilon}(x-y-\xi)\le G_{m-\varepsilon}(x-y)$$
for all $y\le x-R$. Using (18) with $r_c=0$, we obtain
$$\mathbf EG_{m-\varepsilon}(t-\xi)\le\mathbf P(\xi>t)+\frac1{m-\varepsilon}\,\mathbf E\big[\overline F_s(t-\xi);\,\xi\le t\big].$$
According to the definition of $\mathcal S^*$, there exists $R_1$ such that
$$\int_0^t\overline F(t-y)\overline F(y)\,dy\le(2a_++\varepsilon)\overline F(t)$$
for all $t\ge R_1$. Furthermore, since $F$ is long-tailed, we can choose $R_2$ so large that the remaining error terms are sufficiently small for all $t\ge R_2$. This immediately implies (10) with $R=\max\{R_1,R_2\}$.

2.2. Proof of Corollary 4. Fix $\varepsilon>0$ and pick $R$ such that $G_{m+\varepsilon}(x-S_{n\wedge\mu_{x-R}})$ is a submartingale. Then,
$$G_{m+\varepsilon}(x)\le\mathbf EG_{m+\varepsilon}\big(x-S_{n\wedge\mu_{x-R}}\big)\quad\text{for every }n\ge1.$$
Hence, letting $n\to\infty$,
$$G_{m+\varepsilon}(x)\le\mathbf P(\mu_{x-R}<\infty)=\mathbf P(M>x-R).$$
Letting $x$ tend to infinity and noting that $\overline F_s(x+R)\sim\overline F_s(x)$ for long-tailed $F$, we obtain
$$\liminf_{x\to\infty}\frac{\mathbf P(M>x)}{\overline F_s(x)}\ge\frac1{m+\varepsilon}.$$
Since $\varepsilon>0$ is arbitrary, the lower bound in (11) holds.
To prove the corresponding upper bound, fix $\varepsilon>0$ and pick $R$ such that $Y_n=G_{m-\varepsilon}(x-S_{n\wedge\mu_{x-R}})$ is a supermartingale. Then,
$$G_{m-\varepsilon}(x)\ge\mathbf EY_n\quad\text{for every }n\ge1. \tag{20}$$
Let $r>0$ be a number which we pick later. Using the strong Markov property, we can bound $\mathbf P(M>x)$ from above in terms of $\mathbf EY_n$ and $\mathbf P(M>r)$. Now pick $r$ sufficiently large so that $\mathbf P(M>r)\le\overline F_s(R)/(m-\varepsilon)$. Combining the resulting estimate with (20) and letting $x$ tend to infinity, we obtain
$$\limsup_{x\to\infty}\frac{\mathbf P(M>x)}{\overline F_s(x)}\le\frac1{m-\varepsilon}.$$
Since $\varepsilon>0$ is arbitrary, the upper bound holds.