A note on the Kesten--Grincevi\v{c}ius--Goldie theorem

Consider the perpetuity equation $X \stackrel{\mathcal{D}}{=} A X + B$, where $(A,B)$ and $X$ on the right-hand side are independent. The Kesten--Grincevi\v{c}ius--Goldie theorem states that $P \{ X>x \} \sim c x^{-\kappa}$ if $E A^\kappa = 1$, $E A^\kappa \log_+ A<\infty$, and $E |B|^\kappa<\infty$. We assume that $E |B|^\nu<\infty$ for some $\nu>\kappa$, and consider two cases (i) $E A^\kappa = 1$, $E A^\kappa \log_+ A = \infty$; (ii) $E A^\kappa<1$, $E A^t = \infty$ for all $t>\kappa$. We show that under appropriate additional assumptions on $A$ the asymptotic $P \{ X>x \} \sim c x^{-\kappa} \ell(x) $ holds, where $\ell$ is a nonconstant slowly varying function. We use Goldie's renewal theoretic approach.


Introduction and results
Consider the perpetuity equation
$$X \stackrel{\mathcal{D}}{=} A X + B, \tag{1}$$
where $(A, B)$ and $X$ on the right-hand side are independent. To exclude degenerate cases, as usual we assume that $P\{Ax + B = x\} < 1$ for any $x \in \mathbb{R}$. We also assume that $A \geq 0$, $A \not\equiv 1$, and that $\log A$ conditioned on $A \neq 0$ is nonarithmetic. The first results on the existence and tail behavior of the solution are due to Kesten [25], who proved that if
$$\begin{gathered} EA^\kappa = 1, \quad EA^\kappa \log_+ A < \infty, \quad E|B|^\kappa < \infty \ \text{ for some } \kappa > 0, \\ \text{and } \log A \text{ conditioned on } A \neq 0 \text{ is nonarithmetic}, \end{gathered} \tag{2}$$
where $\log_+ x = \max\{\log x, 0\}$, then the solution of (1) has a Pareto-like tail, i.e.
$$P\{X > x\} \sim c_+\, x^{-\kappa}, \qquad P\{X < -x\} \sim c_-\, x^{-\kappa} \tag{3}$$
for some $c_+, c_- \geq 0$, $c_+ + c_- > 0$. (In the following any nonspecified limit relation is meant as $x \to \infty$.) Actually, Kesten proved a similar statement in $d$ dimensions. Later Goldie [18] simplified the proof of the same result in the one-dimensional case (for more general equations) using renewal theoretic methods. His method is based on ideas of Grincevičius [21], who partly rediscovered Kesten's results. We refer to the implication (2) $\Rightarrow$ (3) as the Kesten-Grincevičius-Goldie theorem. That is, under general conditions on $A$, if $P\{A > 1\} > 0$ then the tail of the solution decreases at least polynomially.
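As a quick numerical illustration of the Kesten-Grincevičius-Goldie tail (our own sketch, not part of the paper's argument), the following simulates the perpetuity (1) by a truncated backward iteration. The concrete distributions are assumptions of ours: $\log A \sim N(-\sigma^2/2, \sigma^2)$, so that $EA^\kappa = 1$ holds with $\kappa = 1$ and $EA^\kappa \log_+ A < \infty$ (the classical case (2)), and $B$ standard exponential.

```python
import numpy as np

# Simulation sketch of the perpetuity X =_D AX + B, equation (1); the concrete
# distributions are our choice, not the paper's.  With log A ~ N(-sigma^2/2, sigma^2)
# we have E A = 1, so kappa = 1 and E A log_+ A < infinity (classical KGG case);
# B is standard exponential.
rng = np.random.default_rng(0)
sigma = 1.0

def sample_perpetuity(n_paths=4000, n_terms=600):
    """Truncated backward series X = sum_{n>=1} A_1*...*A_{n-1} * B_n."""
    log_a = rng.normal(-sigma**2 / 2, sigma, size=(n_paths, n_terms))
    b = rng.exponential(size=(n_paths, n_terms))
    log_prod = np.cumsum(log_a, axis=1) - log_a  # log(A_1...A_{n-1}); empty product = 1
    return np.sum(np.exp(log_prod) * b, axis=1)

x = sample_perpetuity()

# Hill estimator of the tail index from the top k order statistics;
# the KGG theorem predicts P{X > x} ~ c x^{-1}, i.e. an estimate near 1.
k = 250
order = np.sort(x)[::-1]
hill = 1.0 / np.mean(np.log(order[:k] / order[k]))
print(f"Hill estimate of kappa: {hill:.2f}")
```

The truncation at 600 terms is harmless here, since the partial products decay like $e^{-n\sigma^2/2}$.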
In this direction we mention a result of Dyszewski [12], who showed that the tail of the solution of (1) can even be slowly varying. On the other hand, Goldie and Grübel [19] showed that under the assumption $A \leq 1$ a.s. the tail of the solution decays at least exponentially fast. For further results in the thin-tailed case see Hitczenko and Wesołowski [22]. Returning to the heavy-tailed case, Grey [20] showed that if $EA^\kappa < 1$ and $EA^{\kappa + \varepsilon} < \infty$ for some $\varepsilon > 0$, then the tail of $X$ is regularly varying with index $-\kappa$ if and only if the tail of $B$ is. Grey's results are also based on the earlier results of Grincevičius [21]. That is, the regular variation of the solution $X$ of (1) is caused either by $A$ alone or by $B$ alone (under some weak condition on the other variable). Our intention in the present note is to explore further the role of $A$, i.e. to extend the Kesten-Grincevičius-Goldie theorem. More precisely, we assume that $E|B|^\nu < \infty$ for some $\nu > \kappa$, and we obtain sufficient conditions on $A$ that imply $P\{X > x\} \sim \ell(x)\, x^{-\kappa}$, where $\ell(\cdot)$ is some nonconstant slowly varying function.
The perpetuity equation (1) has a wide range of applications; we only mention the ARCH and GARCH models in financial time series analysis, see Embrechts, Klüppelberg and Mikosch [13, Section 8.4, Perpetuities and ARCH Processes]. For a complete account of equation (1) we refer to Buraczewski, Damek and Mikosch [7]. Equation (1) is also strongly related to exponential functionals of Lévy processes, see Arista and Rivero [3] and Behme and Lindner [5] and the references therein.
The key idea in Goldie's proof is to introduce the new probability measure
$$P_\kappa\{\log A \in C\} = E\left[ A^\kappa\, I(\log A \in C) \right], \tag{4}$$
where $I(\cdot)$ stands for the indicator function. Since $EA^\kappa = 1$, this is indeed a probability measure. If $F$ is the distribution function (df) of $\log A$ under $P$, then under $P_\kappa$ the df of $\log A$ is
$$F_\kappa(x) = \int_{-\infty}^x e^{\kappa y}\, F(\mathrm{d} y). \tag{5}$$
Under $P_\kappa$ equation (1) can be rewritten as a renewal equation, where the renewal function corresponds to $F_\kappa$. If $E_\kappa \log A = E A^\kappa \log A \in (0, \infty)$, then a renewal theorem on the line implies the required tail asymptotics. Yet a smoothing transformation and a Tauberian argument are needed, since key renewal theorems apply only to directly Riemann integrable functions. What we assume instead of the finiteness of the mean is that under $P_\kappa$ the variable $\log A$ is in the domain of attraction of a stable law with index $\alpha \in (0, 1]$, i.e. $\log A \in \mathcal{D}(\alpha)$. Since the left tail of $F_\kappa$ is exponentially thin,
$$F_\kappa(-x) = \int_{-\infty}^{-x} e^{\kappa y}\, F(\mathrm{d} y) \leq e^{-\kappa x}, \quad x > 0, \tag{6}$$
under $P_\kappa$ the variable $\log A$ belongs to $\mathcal{D}(\alpha)$ if and only if
$$\overline{F}_\kappa(x) = 1 - F_\kappa(x) \sim \frac{\ell(x)}{x^\alpha}, \tag{7}$$
where $\ell$ is a slowly varying function. Let $U(x) = \sum_{n=0}^\infty F_\kappa^{*n}(x)$ denote the renewal function of $\log A$ under $P_\kappa$. Note that $U(x) < \infty$ for all $x \in \mathbb{R}$, since the random walk $(S_n = \log A_1 + \ldots + \log A_n)_{n \geq 1}$ drifts to infinity under $P_\kappa$ and $E_\kappa[(\log A)_-]^2 < \infty$ by (6); see Theorem 2.1 by Kesten and Maller [26]. Put
$$m(x) = \int_0^x \left( \overline{F}_\kappa(y) + F_\kappa(-y) \right) \mathrm{d} y \sim \int_0^x \overline{F}_\kappa(y)\, \mathrm{d} y \sim \frac{\ell(x)\, x^{1-\alpha}}{1 - \alpha}$$
for the truncated expectation; the first asymptotic follows from (6), the second from (7), and the latter holds only for $\alpha \neq 1$. In any case $m$ is regularly varying with index $1 - \alpha$. To obtain the asymptotic behavior of the solution of the renewal equation we have to use a key renewal theorem for random variables with infinite mean. The infinite mean analogue of the strong renewal theorem (SRT) is the convergence
$$U(x + h) - U(x) \sim \frac{C_\alpha\, h}{m(x)} \quad \text{for all } h > 0. \tag{8}$$
The first infinite mean SRT was shown by Garsia and Lamperti [17] in 1963 for nonnegative integer valued random variables, which was extended to the nonarithmetic case by Erickson [14, 15]. In both cases it was shown that for $\alpha \in (1/2, 1]$ (in [17] $\alpha < 1$) assumption (7) implies the SRT, while for $\alpha \leq 1/2$ further assumptions are needed. For $\alpha \leq 1/2$ sufficient conditions for (8) were given by Chi [9], Doney [10], and Vatutin and Topchii [29].
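To make the change of measure (4) concrete, here is a small Monte Carlo sketch (the distribution is our toy example, not the paper's): for lognormal $A$ with $EA^\kappa = 1$, $\kappa = 1$, tilting turns the negative drift $E \log A = -\sigma^2/2$ into the positive drift $E_\kappa \log A = E[A^\kappa \log A]$, which equals $+\sigma^2/2$ in this case.

```python
import numpy as np

# Goldie's tilting (4): P_kappa(log A in dy) = e^{kappa y} P(log A in dy).
# Example distribution is ours: log A ~ N(-sigma^2/2, sigma^2), kappa = 1,
# so that E A^kappa = 1.  Then E log A = -sigma^2/2 < 0, while under P_kappa
# the drift is E_kappa log A = E[A^kappa log A] = +sigma^2/2 > 0.
rng = np.random.default_rng(1)
sigma, kappa = 1.0, 1.0

log_a = rng.normal(-sigma**2 / 2, sigma, size=500_000)
drift_p = log_a.mean()                             # drift under P, about -0.5
drift_pk = np.mean(np.exp(kappa * log_a) * log_a)  # drift under P_kappa, about +0.5

print(f"E log A       = {drift_p:+.3f}")
print(f"E_kappa log A = {drift_pk:+.3f}")
```

So under $P_\kappa$ the random walk $(S_n)$ indeed drifts to $+\infty$, which is what makes the renewal function $U$ finite.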
The necessary and sufficient condition for nonnegative random variables was given independently by Caravenna [8] and Doney [11]. They showed that if (7) holds with $\alpha \leq 1/2$ for a nonnegative random variable with df $H$, then (8) holds if and only if (9) holds. We need this result in our case, where the random variable is not necessarily positive, but the left tail is exponentially thin. This follows along the same lines as the proof of the SRT in [8]. For the sake of completeness, in the Appendix we sketch the proof of this result; see Theorem 7. For further results on and the history of the infinite mean SRT we refer to [8, 11] and the references therein. In Lemma 1 below, which is a modification of Erickson's Theorem 3 [14], we prove the corresponding key renewal theorem. Since in the literature ([28, Lemma 3], [29, Theorem 4]) this lemma is stated incorrectly, we give a counterexample in the Appendix. We use the notation $x_+ = \max\{x, 0\}$, $x_- = \max\{-x, 0\}$, $x \in \mathbb{R}$. Summarizing, our assumptions on $A$ are the following:
$$\begin{gathered} EA^\kappa = 1, \quad \text{(7) and (9) hold for } F_\kappa \text{ for some } \kappa > 0 \text{ and } \alpha \in (0, 1], \\ \text{and } \log A \text{ conditioned on } A \neq 0 \text{ is nonarithmetic.} \end{gathered} \tag{10}$$
Theorem 1. Assume (10), $E|B|^\nu < \infty$ for some $\nu > \kappa$, and $P\{Ax + B = x\} < 1$ for any $x \in \mathbb{R}$.
Then for the tail of the solution of the perpetuity equation (1) we have
$$P\{X > x\} \sim \frac{c_+\, x^{-\kappa}}{m(\log x)}, \qquad P\{X < -x\} \sim \frac{c_-\, x^{-\kappa}}{m(\log x)}, \tag{11}$$
with some constants $c_+, c_- \geq 0$. The assumption $P\{Ax + B = x\} < 1$ is only needed to ensure the positivity of $c_+ + c_-$: if $P\{Ax_0 + B = x_0\} = 1$ for some $x_0 \in \mathbb{R}$, then the solution of (1) is $X \equiv x_0$, for which (11) trivially holds, with both constants on the right-hand side equal to 0.
The conditions of the theorem are stated in terms of the properties of $A$ under the new measure $P_\kappa$. Simple properties of regularly varying functions imply that if $e^{\kappa x}\, \overline{F}(x) = \alpha\, \ell(x)/(\kappa\, x^{\alpha + 1})$ with a slowly varying function $\ell$, then (7) holds; see the remark after Theorem 2 of Korshunov [27]. Using the same methods, Goldie obtained tail asymptotics for solutions of more general random equations. The extension of these results to our setup is straightforward. We mention a particular example, because in the proof of the positivity of the constant in Theorems 1 and 3 we need a result on the maximum of a random walk.
Consider the equation
$$X \stackrel{\mathcal{D}}{=} A X \vee B, \tag{12}$$
where $a \vee b = \max\{a, b\}$, $A \geq 0$, and $(A, B)$ and $X$ on the right-hand side are independent. Theorem 5.2 in [18] states that if (2) holds, then there is a unique solution $X$ of (12), for which $P\{X > x\} \sim c\, x^{-\kappa}$ with some $c \geq 0$, and $c > 0$ if and only if $P\{B > 0\} > 0$.
Theorem 2. Assume (10) and $E|B|^\nu < \infty$ for some $\nu > \kappa$. Then for the tail of the solution of (12) we have
$$\lim_{x \to \infty} m(\log x)\, x^\kappa\, P\{X > x\} = c \tag{13}$$
with some constant $c \geq 0$. Equation (12) has an important application in the analysis of the maximum of random walks: for $B \equiv 1$ the solution of (12) is $X = e^M$, where $M = \max\{0, S_1, S_2, \ldots\}$ is the maximum of the random walk $(S_n)$. For general nonnegative $B$ the equation corresponds to the maximum of perturbed random walks; see Iksanov [24].
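The identity behind this application can be checked numerically: iterating $X \mapsto AX \vee 1$ reproduces the law of $e^M$. The Gaussian choice for $\log A$ below is our assumption, made only for illustration.

```python
import numpy as np

# Equation (12) with B = 1: the solution is X = e^M, M = max{0, S_1, S_2, ...}.
# We compare the running maximum of the walk with the iterated max-recursion
# X <- A X v 1; for each n the two constructions agree in distribution.
rng = np.random.default_rng(2)
n_paths, n_steps = 20_000, 300

steps = rng.normal(-0.5, 1.0, size=(n_paths, n_steps))
m = np.maximum(np.cumsum(steps, axis=1).max(axis=1), 0.0)
x_walk = np.exp(m)                       # e^M from the walk's running maximum

x_iter = np.ones(n_paths)                # forward iteration of X <- A X v 1
for _ in range(n_steps):
    a = np.exp(rng.normal(-0.5, 1.0, size=n_paths))
    x_iter = np.maximum(a * x_iter, 1.0)

# the two independent samples should have close quantiles
for q in (0.5, 0.9, 0.99):
    print(q, np.quantile(x_walk, q), np.quantile(x_iter, q))
```

Unwinding the recursion shows $X_n = \max(1, A_n, A_n A_{n-1}, \ldots, A_n \cdots A_1)$, which equals $e^{\max\{0, S_1, \ldots, S_n\}}$ in distribution, so the agreement is exact for every $n$, not only in the limit.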
Assume now that $EA^\kappa = \theta < 1$ for some $\kappa > 0$, and $EA^t = \infty$ for any $t > \kappa$. Consider the new probability measure
$$P_\kappa\{\log A \in C\} = \theta^{-1}\, E\left[ A^\kappa\, I(\log A \in C) \right],$$
that is, under the new measure $\log A$ has df
$$F_\kappa(x) = \theta^{-1} \int_{-\infty}^x e^{\kappa y}\, F(\mathrm{d} y).$$
Note that these are the same definitions as in (4) and (5) up to the normalization $\theta^{-1}$, the only difference being that now $\theta < 1$. Therefore the same notation should not be confusing. The assumption $EA^t = \infty$ for all $t > \kappa$ means that $F_\kappa$ is heavy-tailed. Rewriting (1) under the new measure $P_\kappa$ now leads to a defective renewal equation for the tail of $X$. To analyze the asymptotic behavior of the resulting equation we use the techniques and results developed by Asmussen, Foss and Korshunov [4]. A slight modification of their setup is necessary, since our df $F_\kappa$ is not concentrated on $[0, \infty)$.
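A toy example (our own, not taken from the paper) of this case may help fix ideas: below $EA^\kappa = \theta < 1$, while $EA^t = \infty$ for every $t > \kappa$, and the tilted df $F_\kappa$ has a Pareto tail.

```python
import numpy as np

# Sketch of case (ii), with our own toy density (not from the paper): kappa = 1,
# log A has density f(y) = e^{-y} / (2 y^2) for y >= 1, and the remaining mass
# 1/2 sits at y = -2.  Then
#   theta = E A^kappa = int_1^inf 1/(2 y^2) dy + e^{-2}/2 < 1,
# while for t > kappa the integrand of E A^t grows, so E A^t = infinity, and
# the tilted df has the Pareto-type tail bar-F_kappa(x) proportional to 1/x.
kappa = 1.0
y = np.linspace(1.0, 2000.0, 2_000_000)
dy = y[1] - y[0]

tilted = 0.5 / y**2                              # e^{kappa y} f(y) for kappa = 1
theta = np.sum(tilted) * dy + 0.5 * np.exp(-2.0)  # Riemann sum + atom at y = -2
print(f"theta = {theta:.3f}")                    # below 1: the tilt is defective

grow = 0.5 * np.exp(0.1 * y) / y**2              # e^{t y} f(y) with t = 1.1 > kappa
print(grow[-1] > grow[0])                        # integrand grows: E A^{1.1} = infinity
```

Here the tilted tail $\overline{F}_\kappa(x) \propto 1/x$ is regularly varying with index $-1$, hence heavy-tailed, exactly the situation handled by the defective renewal equation.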
Let $\Delta = (0, T]$ for some $T > 0$, and for a df $H$ write $H(x + \Delta) = H(x + T) - H(x)$. Recall that $H$ belongs to the class $\mathcal{L}_\Delta$ if $H(x + \Delta) > 0$ for all large enough $x$ and $H(x + t + \Delta)/H(x + \Delta) \to 1$ for every fixed $t$, and to the class $\mathcal{S}_\Delta$ if in addition $H^{*2}(x + \Delta)/H(x + \Delta) \to 2$. If $H \in \mathcal{S}_\Delta$ for every $T > 0$, then it is called locally subexponential, $H \in \mathcal{S}_{loc}$. The definition of the class $\mathcal{S}_\Delta$ is given by Asmussen, Foss and Korshunov [4] for distributions on $[0, \infty)$ and by Foss, Korshunov and Zachary [16, Section 4.7] for distributions on $\mathbb{R}$. In order to use a slight extension of Theorem 5 of [4] we need the additional natural assumption $\sup_{y > x} F_\kappa(y + \Delta) = O(F_\kappa(x + \Delta))$ as $x \to \infty$. Properties of locally subexponential distributions, in particular their relation to infinitely divisible distributions, were investigated by Watanabe and Yamamuro [31, 32]. Our assumptions on $A$ are the following:
$$\begin{gathered} EA^\kappa = \theta < 1 \ \text{and} \ EA^t = \infty \ \text{for all } t > \kappa, \qquad F_\kappa \in \mathcal{S}_{loc}, \qquad \sup_{y > x} F_\kappa(y + \Delta) = O(F_\kappa(x + \Delta)), \\ \text{and } \log A \text{ conditioned on } A \text{ being nonzero is nonarithmetic.} \end{gathered} \tag{14}$$
Theorem 3. Assume (14), $E|B|^\nu < \infty$ for some $\nu > \kappa$, and $P\{Ax + B = x\} < 1$ for any $x \in \mathbb{R}$. Then for the tail of the solution of the perpetuity equation (1) we have
$$P\{X > x\} \sim c\, x^{-\kappa}\, g(\log x), \tag{15}$$
where $g(x) = F_\kappa(x + 1) - F_\kappa(x)$ and $c \geq 0$. Note that the condition $F_\kappa \in \mathcal{L}_\Delta$ with $\Delta = (0, 1]$ implies that $g(\log x)$ is slowly varying. Indeed, for any $\lambda > 0$,
$$\frac{g(\log(\lambda x))}{g(\log x)} = \frac{g(\log x + \log \lambda)}{g(\log x)} \to 1.$$
The condition $F_\kappa \in \mathcal{S}_{loc}$ is much stronger than the corresponding regular variation condition in Theorem 1. Typical examples satisfying this condition are the Pareto, lognormal and Weibull (with parameter less than 1) distributions, see [4, Section 4]. For example, in the Pareto case, i.e. if $\overline{F}_\kappa(x) = x^{-\gamma}$ for some $\gamma > 0$ and $x$ large enough, then $g(x) \sim \gamma\, x^{-\gamma - 1}$.
Theorem 4. Assume (14) and $E|B|^\nu < \infty$ for some $\nu > \kappa$. Then for the tail of the solution of (12) we have
$$\lim_{x \to \infty} \frac{x^\kappa\, P\{X > x\}}{g(\log x)} = c \tag{16}$$
with some constant $c \geq 0$, where $g(x) = F_\kappa(x + 1) - F_\kappa(x)$.
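The slow variation of $g(\log x)$ noted above can be checked numerically in the Pareto case; the parameter $\gamma = 2$ and the concrete tail below are our illustrative choices.

```python
import numpy as np

# Check (our illustration) that g(x) = F_kappa(x+1) - F_kappa(x) gives a slowly
# varying g(log x), as noted after Theorem 3.  Pareto-type tilted tail:
# bar-F_kappa(x) = x^{-gamma} for x >= 1, hence g(x) ~ gamma x^{-gamma-1}.
gamma = 2.0

def tail(x):
    return np.where(x >= 1.0, x**(-gamma), 1.0)

def g(x):
    return tail(x) - tail(x + 1.0)

ratios = []
for x in (1e3, 1e9, 1e27):
    r = g(np.log(10.0 * x)) / g(np.log(x))   # lambda = 10
    ratios.append(float(r))
    print(f"x = {x:.0e}: ratio = {r:.3f}")   # increases towards 1, slowly
```

The ratio tends to 1 only logarithmically in $x$, which is typical for slowly varying corrections of this kind.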
In the special case B ≡ 1 we have the following.
Corollary 1. Let $\log A_1, \log A_2, \ldots$ be independent copies of $\log A$, let $S_n = \log A_1 + \log A_2 + \ldots + \log A_n$ denote their partial sums, and $M = \max\{0, S_1, S_2, \ldots\}$. Assume (14). Then
$$P\{M > x\} \sim c\, e^{-\kappa x}\, g(x)$$
for some constant $c > 0$. From Goldie's unified method it is clear that the asymptotic behavior of the solution $X$ of (1) is closely related to the maximum $M$ of the corresponding random walk $(S_n)$. The assumption $EA^\kappa = 1$ together with $A \not\equiv 1$ implies that $E \log A < 0$, so the random walk drifts to $-\infty$, and thus $M$ is $P$-a.s. finite. Assuming (7), Korshunov [27] showed for $\alpha > 1/2$ (all he needs is the SRT, so the same holds under (9) for $\alpha \in (0, 1)$) that for some constant $c > 0$
$$P\{M > x\} \sim \frac{c\, e^{-\kappa x}}{m(x)}.$$
Thus Theorem 2 contains Korshunov's result [27]. However, note that Korshunov also obtained the corresponding liminf result in (13) when the SRT does not hold. With our method the liminf result does not follow, due to the smoothing transform (20): the problem is to 'unsmooth' the liminf version of (26). The same difficulty appears in the perpetuity case.
In specific cases the connection between $M$ and $X$ is even more transparent. Let $(\xi_t)_{t \geq 0}$ be a nonmonotone Lévy process which tends to $-\infty$. Consider its exponential functional $J = \int_0^\infty e^{\xi_t}\, \mathrm{d} t$ and its supremum $\overline{\xi}_\infty = \sup_{t \geq 0} \xi_t$. Arista and Rivero [3, Theorem 4] showed that $P\{J > x\}$ is regularly varying with index $-\alpha$ if and only if $P\{e^{\overline{\xi}_\infty} > x\}$ is regularly varying with the same index. Now, if $(\xi_t)$ has finite jump activity and zero drift, then conditioning on its first jump time one obtains the perpetuity equation $J \stackrel{\mathcal{D}}{=} AJ + B$, with $B$ an exponential random variable independent of $A$, and $\log A$ the first jump size. From this we see that in this special case Theorems 1/3 and Theorems 2/4 with $B \equiv 1$ are equivalent. Finally, we note that using Alsmeyer's sandwich method [1] it is possible to apply our results to iterated function systems.

Proofs
First, we prove the analogue of Goldie's implicit renewal theorem [18, Theorem 2.3] in both cases.
Theorem 5. Assume (10), and for some random variable $X$
$$\int_0^\infty |P\{X > x\} - P\{AX > x\}|\, x^{\kappa - 1}\, \mathrm{d} x < \infty,$$
where $X$ and $A$ are independent. Then
$$\lim_{x \to \infty} m(\log x)\, x^\kappa\, P\{X > x\} = C_\alpha \int_0^\infty \left( P\{X > u\} - P\{AX > u\} \right) u^{\kappa - 1}\, \mathrm{d} u.$$
Proof. We follow closely Goldie's proof. Put
$$\psi(x) = e^{\kappa x} \left( P\{X > e^x\} - P\{AX > e^x\} \right), \qquad g(x) = e^{\kappa x}\, P\{X > e^x\}. \tag{17}$$
Using that $X$ and $A$ are independent we obtain the equation
$$g(x) = \psi(x) + E\left[ A^\kappa\, g(x - \log A) \right]. \tag{18}$$
Using the definition (4) we see that $E_\kappa h(\log A) = E[h(\log A)\, A^\kappa]$ for any measurable $h$, thus under the new measure equation (18) reads as
$$g(x) = \psi(x) + E_\kappa\, g(x - \log A). \tag{19}$$
We have to introduce a smoothing transformation, since the function $\psi$ is not necessarily directly Riemann integrable (dRi), which is needed to apply the key renewal theorem. Introduce the smoothing transform of $g$ as
$$\widehat{g}(x) = \int_{-\infty}^x e^{-(x - y)}\, g(y)\, \mathrm{d} y. \tag{20}$$
Applying this transform to both sides of (19) we get the renewal equation
$$\widehat{g}(x) = \widehat{\psi}(x) + E_\kappa\, \widehat{g}(x - \log A). \tag{21}$$
Iterating (21) we obtain for any $n \geq 1$
$$\widehat{g}(x) = \sum_{k=0}^{n-1} E_\kappa\, \widehat{\psi}(x - S_k) + E_\kappa\, \widehat{g}(x - S_n), \tag{22}$$
where $\log A_1, \log A_2, \ldots$ are independent copies of $\log A$ (under $P$ and $P_\kappa$), independent of $X$, $S_0 = 0$, and $S_n = \log A_1 + \ldots + \log A_n$. Since $S_n \to -\infty$ $P$-a.s., the last term in (22) tends to $0$, where we also used that $E_\kappa \widehat{g}(x - S_n) = E[\widehat{g}(x - S_n)\, e^{\kappa S_n}]$. Therefore, as $n \to \infty$, from (22) we have
$$\widehat{g}(x) = \int_{\mathbb{R}} \widehat{\psi}(x - y)\, U(\mathrm{d} y), \tag{23}$$
where $U(x) = \sum_{n=0}^\infty F_\kappa^{*n}(x)$ is the renewal function of $F_\kappa$. The question is under what conditions on $z$ the key renewal theorem
$$\lim_{x \to \infty} m(x) \int_{\mathbb{R}} z(x - y)\, U(\mathrm{d} y) = C_\alpha \int_{\mathbb{R}} z(y)\, \mathrm{d} y \tag{24}$$
holds. In the following lemma, which is a modification of Erickson's Theorem 3 [14], we give a sufficient condition on $z$ for (24) to hold. We note that both in Lemma 3 of [28] and in Theorem 4 of [29] the authors wrongly claim that (24) holds whenever $z$ is dRi; a counterexample is given in the Appendix. The same statement under a less restrictive condition is shown using a stopping time argument in [23, Proposition 6.4.2]. For the sake of completeness we give a proof here.
Lemma 1. Let $z$ be dRi with $z(x) = O(1/x)$ as $x \to \infty$, and assume that (10) holds. Then (8) implies (24).
Proof. Using the decomposition $z = z_+ - z_-$ we may and do assume that $z$ is nonnegative. Split the integral $m(x) \int_{\mathbb{R}} z(x - y)\, U(\mathrm{d} y)$ into three parts $I_1(x)$, $I_2(x)$, $I_3(x)$, where $I_1$ corresponds to $x - y < 0$. We first show that
$$I_1(x) \to C_\alpha \int_{-\infty}^0 z(y)\, \mathrm{d} y,$$
where the convergence $m(x)/m(x - kh) \to 1$, for fixed $k \leq 0$ and $h > 0$, follows from the fact that $m$ is regularly varying with index $1 - \alpha$. Since $m$ is nondecreasing and $k \leq 0$ this also gives us an integrable majorant uniformly in $k \leq 0$, i.e. for $x$ large enough $\sup_{k < 0} m(x)/m(x - kh) \leq 1$, and similarly for the upper bound. Since $z$ is dRi the statement follows. The convergence $I_2(x) \to C_\alpha \int_0^\infty z(y)\, \mathrm{d} y$ follows exactly as in the proof of Theorem 3 in [14], since in that proof only formula (8) and its consequence $U(x) \sim C_\alpha\, x/(\alpha\, m(x))$ are used.
Finally, we show that $I_3(x) \to 0$; this is where the growth condition on $z$ is needed. Recall the definition of $\psi$ in (17). Substituting $u = e^y$ we have
$$\int_{\mathbb{R}} |\psi(y)|\, \mathrm{d} y = \int_0^\infty |P\{X > u\} - P\{AX > u\}|\, u^{\kappa - 1}\, \mathrm{d} u, \tag{25}$$
and the last integral is finite due to our assumptions. The same calculation shows that $\int_{\mathbb{R}} \widehat{\psi}(y)\, \mathrm{d} y = \int_{\mathbb{R}} \psi(y)\, \mathrm{d} y$. It follows from [18, Lemma 9.2] that $\widehat{\psi}$ is dRi, thus from Lemma 1 and (25) we obtain for the solution of (23)
$$\widehat{g}(x) \sim \frac{C_\alpha}{m(x)} \int_{\mathbb{R}} \widehat{\psi}(y)\, \mathrm{d} y. \tag{26}$$
From (26) the statement follows again in the same way as in [18, Lemma 9.3]. Indeed, since $m(x)$ is regularly varying, $m(\log x)$ is slowly varying, thus from (26) we obtain, for any $0 < a < 1 < b$, asymptotic upper and lower bounds on $m(\log x)\, x^\kappa\, P\{X > x\}$; letting $a \uparrow 1$, $b \downarrow 1$ the convergence follows.
Theorem 6. Assume (14), and for some random variable $X$
$$\int_0^\infty |P\{X > x\} - P\{AX > x\}|\, x^{\kappa + \delta - 1}\, \mathrm{d} x < \infty$$
for some $\delta > 0$, where $X$ and $A$ are independent. Then
$$\lim_{x \to \infty} \frac{x^\kappa\, P\{X > x\}}{g(\log x)} = \frac{\theta}{(1 - \theta)^2} \int_0^\infty \left( P\{X > u\} - P\{AX > u\} \right) u^{\kappa - 1}\, \mathrm{d} u.$$
Proof. Following the same steps as in the proof of Theorem 5 we obtain the defective renewal equation
$$\widehat{g}(x) = \sum_{n=0}^\infty \theta^n\, E_\kappa\, \widehat{\psi}(x - S_n).$$
Theorem 5 of [4] gives the following; recall $g$ from Theorem 3.
Lemma 2. Assume (14), $z$ is dRi, and $z(x) = o(g(x))$. Then
$$\sum_{n=0}^\infty \theta^n \int_{\mathbb{R}} z(x - y)\, F_\kappa^{*n}(\mathrm{d} y) \sim \frac{\theta}{(1 - \theta)^2}\, g(x) \int_{\mathbb{R}} z(y)\, \mathrm{d} y. \tag{27}$$
As in (25) we have $\widehat{\psi}(x) = O(e^{-\delta x})$ for some $\delta > 0$. Since $F_\kappa$ is subexponential, $g(x)\, e^{\delta x} \to \infty$, and thus $\widehat{\psi}(x) = o(g(x))$. That is, the condition of Lemma 2 holds, and we obtain the asymptotic
$$\widehat{g}(x) \sim \frac{\theta}{(1 - \theta)^2}\, g(x) \int_{\mathbb{R}} \widehat{\psi}(y)\, \mathrm{d} y.$$
Since $g(x)$ is subexponential, $g(\log x)$ is slowly varying, and the proof can be finished in exactly the same way as in Theorem 5.
Now the proofs of Theorems 1, 2, 3 and 4 are applications of the corresponding implicit renewal theorems.
Proofs of Theorems 2 and 4. The existence of a unique solution of (12) follows from Goldie [18]. Therefore (13) and (16) follow from Theorems 5 and 6, respectively. The form of the limit constant follows similarly. Note that for $B \equiv 1$, i.e. when $\log X = M$ is the maximum of a random walk with negative drift, the constant is strictly positive.
Clearly, the latter condition is weaker for independent A and B.

Strong renewal theorem
In this subsection we show that a slight extension of the strong renewal theorem of Caravenna [8] and Doney [11] holds. The proof follows along the same lines as Caravenna's proof [8], so we only sketch it and emphasize the differences. For convenience, we also use Caravenna's notation.
Theorem 7. Assume that the distribution function $H$ is nonarithmetic, that (9) holds for $H$, and that for some $c, \kappa > 0$, $\alpha \in (0, 1)$, and for a slowly varying function $\ell$ we have
$$H(-x) \leq c\, e^{-\kappa x} \ \text{ for } x > 0, \qquad \overline{H}(x) = 1 - H(x) \sim \frac{\ell(x)}{x^\alpha}. \tag{28}$$
Then, for the renewal function $U(x) = \sum_{n=0}^\infty H^{*n}(x)$, the strong renewal theorem
$$U(x + h) - U(x) \sim \frac{C_\alpha\, h}{m(x)} \tag{29}$$
holds for any $h > 0$, with $m(x) = \int_0^x \overline{H}(y)\, \mathrm{d} y$; conversely, under (28) condition (9) is also necessary for (29).
In the following, $X, X_1, X_2, \ldots$ are iid random variables with df $H$, and $S_n = X_1 + \ldots + X_n$ is their partial sum. We may choose a strictly increasing differentiable function $A$ such that $\overline{H}(x) \sim 1/A(x)$ and $A'(x) \sim \alpha A(x)/x$ (see p. 15 in [6]). The norming constant $a_n$, for which $S_n/a_n$ converges in distribution to an $\alpha$-stable law, can be given as $a_n = a(n) = A^{-1}(n)$, where $A^{-1}$ is the inverse of $A$. We fix an $h > 0$ and put $I = (-h, 0]$.
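For a concrete Pareto example (our assumption, used only for illustration), $\overline{H}(x) = x^{-\alpha}$ for $x \geq 1$ gives $A(x) = x^\alpha$ and $a_n = A^{-1}(n) = n^{1/\alpha}$; a quick simulation confirms that $S_n/a_n$ is stochastically of order one.

```python
import numpy as np

# Sketch of the norming sequence in Theorem 7's setup (our Pareto example):
# bar-H(x) = x^{-alpha} for x >= 1 gives A(x) = x^alpha, a_n = A^{-1}(n) = n^{1/alpha}.
# The medians of S_n / a_n should stabilise as n grows (alpha-stable limit).
rng = np.random.default_rng(3)
alpha = 0.5

ratios = []
for n in (100, 1000, 5000):
    u = rng.random((2000, n))
    s = np.sum(u ** (-1.0 / alpha), axis=1)   # iid Pareto(alpha) increments
    ratios.append(float(np.median(s) / n ** (1.0 / alpha)))
    print(n, round(ratios[-1], 3))
```

With $\alpha = 1/2$ the norming is $a_n = n^2$, so the walk grows far faster than linearly, which is exactly the infinite-mean regime the SRT addresses.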
First note that Lemmas 4.1 and 4.2 in [8] hold true for every H in the domain of attraction of an α-stable law; nonnegativity is not needed here. We show that the simple Lemma 5.1 in [8] also remains true in our case. This is needed to treat the small values of z.
Lemma 3. Assume that (28) holds. Then there exist $C, C' > 0$ such that for all $n \in \mathbb{N}$ and $z > 0$
$$P\{S_n \in z + I\} \leq \frac{C}{a_n}\, e^{-C' n / A(z)}.$$
Proof. Compared to the nonnegative case, we have to handle the event $\{S_n \leq 0\}$. A truncation argument combined with Chernoff's bounding technique shows that for any fixed $K > 0$ there are $c, c' > 0$ such that for all $n \in \mathbb{N}$, $P\{S_n \leq K\} \leq c\, e^{-c' n}$.
Proof of Theorem 7. The necessity is shown in Theorem 1.9 of Caravenna [8] (see also [8, Lemma 3.1], and note that in our case $q = 0$). It is known (in the general two-sided case) that (29) is equivalent to (30); see (15) in [11], (3.4) in [8], or the Appendix in [9].
As in [8, Theorem 6.1] we claim that the induction statement (32) holds for every $\ell \geq 0$. Using [8, Lemma 4.1] we see that (32) holds for $\ell \geq k_\alpha + 1$, where $k_\alpha = \lfloor 1/\alpha \rfloor$, with $\lfloor \cdot \rfloor$ standing for the integer part. The proof goes by backward induction: we assume that (32) holds for $\ell \geq \bar{\ell} + 1$, and we show it for $\ell = \bar{\ell}$. Let $B_n^m(y)$ denote the event on which there are exactly $m$ variables greater than $y$ among $X_1, \ldots, X_n$; here $m \geq 0$, $y > 0$. Arguing as in the proof of [8, Theorem 6.1], we have to show (33) for $m \in \{1, 2, \ldots, k_\alpha - \bar{\ell}\}$, where $\xi_{n,x} = a_n^{\gamma_\alpha} x^{1 - \gamma_\alpha}$, with $\gamma_\alpha = \alpha(1 - (1/\alpha - \lfloor 1/\alpha \rfloor))/4$. We have
$$P\{S_n \in x + I,\ B_n^m(\xi_{n,x})\} \leq \binom{n}{m} P\Big\{ S_n \in x + I,\ \min_{1 \leq i \leq m} X_i > \xi_{n,x} \Big\},$$
and the right-hand side is bounded by a sum of two terms. The first term is exactly the same as in the proof of [8, Theorem 6.1]. More importantly, it can also be handled in exactly the same way, since only Lemmas 4.2 and 5.1 of [8] are used, and the manipulations with the measure $P\{S_m \in \mathrm{d} y\}$ always concern the set where the minimum is positive. Therefore, to finish the proof it is enough to show (34). This is easy, since at the last inequality in its proof we may use (31) and the fact that (30) implies that for any $m \geq 1$
$$\lim_{x \to \infty} x\, A(x)\, P\{S_m \in x + I\} = 0;$$
see Remark 3.2 in [8] or Remark 5 in [11]. Clearly, (34) implies (33), and the proof is finished.

A counterexample
Here we give a counterexample to Lemma 3 of [28] and Theorem 4 of [29], which shows that the key renewal theorem (24) does not follow from the direct Riemann integrability of $z$ alone. Let $a_n = n^{-\beta}$ with some $\beta > 1$, and let $d_n \uparrow \infty$ be a sequence of integers. Consider the function $z$ that satisfies $z(d_n) = a_n$, $z(d_n \pm 1/2) = 0$, is linearly interpolated on the intervals $[d_n - 1/2, d_n + 1/2]$, and is 0 otherwise. Since $\sum_{n=1}^\infty a_n < \infty$, the function $z$ is directly Riemann integrable. Consider a renewal measure $U$ for which the SRT (8) holds. Choosing $d_n = n^2$ and $\beta$ such that $2\alpha + \beta < 2$, and recalling that $m$ is regularly varying with index $1 - \alpha$, we see that $m(a + d_n)\, a_n \to \infty$ for any fixed $a$, so the asymptotic (24) cannot hold. This example also shows that a growth condition on $z$, something like Erickson's $z(x) = O(1/x)$, is needed.
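The divergence driving the counterexample is elementary and can be checked directly; the concrete exponents below are our choice, satisfying $\beta > 1$ and $2\alpha + \beta < 2$.

```python
# Direct check of the growth rate in the counterexample: with m(x) = x^{1-alpha}
# (regular variation index 1 - alpha as in (7)), d_n = n^2 and a_n = n^{-beta},
# m(d_n) a_n = n^{2 - 2*alpha - beta} -> infinity whenever 2*alpha + beta < 2.
# The concrete values alpha = 0.3, beta = 1.2 (exponent 0.2) are our choice.
alpha, beta = 0.3, 1.2
assert 2 * alpha + beta < 2 and beta > 1

values = []
for n in (10, 10**3, 10**5):
    m_dn = (n**2) ** (1 - alpha)              # m(d_n) for d_n = n^2
    values.append(m_dn * float(n) ** (-beta))
    print(n, round(values[-1], 2))            # n^{0.2}: 1.58, 3.98, 10.0
```

Note that $\beta > 1$ keeps $\sum a_n < \infty$ (so $z$ stays dRi), while $2\alpha + \beta < 2$ makes the peaks decay too slowly relative to $1/m(d_n)$.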

Local subexponentiality
We claim that Theorem 5 of [4] remains true in our setup. In addition to the local subexponential property, we assume that $\sup_{y \geq x} H(y + \Delta) = O(H(x + \Delta))$. The main technical tool in [4] is the equivalence in their Proposition 2; in our setup it has an analogous form. The proof is similar to the proof of Proposition 2 in [4], so it is omitted. Assuming the extra growth condition, all the results in [4] hold true with the obvious modifications of the proofs.
Alternatively, we could use Theorem 1.1 of Watanabe and Yamamuro [31], from which the extension of Theorem 5 of [4] also follows. In Theorem 1.1 of [31] a finite exponential moment of the left tail is assumed, which is satisfied in our setup by (6). Also note that Theorem 1.1 is a much stronger result than what we need: it gives an equivalence of certain tails, while we only need the implication (ii) $\Rightarrow$ (iii).