On sequential maxima of exponential sample means, with an application to ruin probability

We obtain the distribution of the maximal average in a sequence of independent identically distributed exponential random variables. Surprisingly enough, it turns out that the inverse distribution admits a simple closed form. An application to ruin probability in a risk-theoretic model is also given.


Introduction
Consider a sequence (X_i)_{i≥1} of independent identically distributed (i.i.d.) random variables, each having exponential distribution with mean 1. For each i ∈ N+ define the sample mean of the first i variables as X̄_i := (X_1 + X_2 + ··· + X_i)/i. The supremum of this sequence, Z_∞ := sup{X̄_i : i ∈ N+}, is finite because the sequence converges to 1 with probability 1.
In this note we compute the distribution function, F_∞, of Z_∞. In fact, it is the inverse of this distribution function that has a nice closed form. Our main result is the following.

Theorem 1. (a) The distribution function F_∞ of Z_∞ satisfies F_∞(x) = 0 for x < 1 and

F_∞(x) = 1 + W_0(−x e^{−x})/x = 1 − (1/x) Σ_{k=1}^∞ (k^{k−1}/k!) (x e^{−x})^k, x ≥ 1,

where W_0 is the principal branch of the Lambert W function, that is, the inverse of the function x ↦ x e^x, x ≥ −1; see [2].
(b) The inverse of F_∞ is given by

F_∞^{−1}(u) = −log(1 − u)/u, 0 < u < 1. (1)

Remarks. (a) The power series Σ_{k=1}^∞ (k^{k−1}/k!) y^k has interval of convergence [−1/e, 1/e] and equals −W_0(−y) there, which justifies the series representation in part (a) of the theorem.
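As a quick numerical illustration (not part of the original argument), the identity between the power series and −W_0(−y) can be checked directly. The helper names `lambert_w0` and `tree_series` below are hypothetical; W_0 is computed by a plain Newton iteration, and the series terms are built from the ratio t_{k+1}/t_k = (1 + 1/k)^{k−1} y to avoid large intermediate numbers.

```python
import math

def lambert_w0(z, tol=1e-12):
    """Principal branch W0 via Newton iteration: solve w * exp(w) = z.
    Adequate for z >= -1/e away from the branch point, as used here."""
    w = 0.0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def tree_series(y, n_terms=300):
    """Truncation of sum_{k>=1} k^(k-1)/k! * y^k, computed via the
    term ratio t_{k+1}/t_k = (1 + 1/k)^(k-1) * y to avoid overflow."""
    term = y          # k = 1 term: 1^0/1! * y
    total = term
    for k in range(1, n_terms):
        term *= (1.0 + 1.0 / k) ** (k - 1) * y
        total += term
    return total

y = 0.25  # any y strictly inside the interval of convergence (-1/e, 1/e)
assert abs(tree_series(y) - (-lambert_w0(-y))) < 1e-10
```

The geometric-like decay of the terms (roughly (e y)^k / k^{3/2}) makes a few hundred terms more than enough for y well inside the interval.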
(b) Clearly, the results of the theorem extend immediately to the case where the X_i's are i.i.d. with X_1 = aY + b, where a > 0, b ∈ R and Y ∼ Exp(1). However, we were not able to find an explicit formula for the distribution of Z_∞ for any other distribution of the X_i's.
(c) Although it is intuitively clear that F ∞ (x) > 0 for x > 1, it is not entirely obvious how to verify it by direct calculations. However, this fact is evident from Theorem 1.
(d) Formula (1) enables the explicit calculation of the percentiles of F_∞. The result is therefore useful for problems of the following kind: a quality-control machine computes successive averages and raises an alarm if some average X̄_n is greater than c, where c is a predetermined constant chosen so that the probability of a false alarm is small, say α. For α ∈ (0, 1), the upper percentage point of F_∞ (that is, the point c_α with F_∞(c_α) = 1 − α) is given by c_α = −log α/(1 − α), and thus the proper value of c is c = c_α.
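A hedged sketch of the computation in remark (d): assuming the representation F_∞(x) = 1 − t(x)/x used in this note, where t(x) is the solution in (0, 1] of t e^{−t} = x e^{−x}, one can round-trip the percentage point c_α through F_∞. The names `t_func` and `F_inf` are illustrative, and the Newton iteration is only claimed to behave well for the inputs shown.

```python
import math

def t_func(x, tol=1e-14):
    """Smallest solution t in (0, 1] of t*exp(-t) = x*exp(-x) for x >= 1
    (equivalently t = -W0(-x*exp(-x))), found by Newton iteration."""
    a = x * math.exp(-x)
    t = 0.5
    for _ in range(200):
        f = t * math.exp(-t) - a
        step = f / ((1.0 - t) * math.exp(-t))
        t -= step
        if abs(step) < tol:
            break
    return t

def F_inf(x):
    """Distribution function of Z_inf for x >= 1: F(x) = 1 - t(x)/x."""
    return 1.0 - t_func(x) / x

alpha = 0.05
c_alpha = -math.log(alpha) / (1.0 - alpha)   # upper percentage point
assert abs(F_inf(c_alpha) - (1.0 - alpha)) < 1e-9
```

In fact t(c_α) = α c_α exactly, which is why the round trip closes to machine precision.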
If in the definition of Z_∞ we discard the first n − 1 values of X̄_i, we obtain the random variable M_n := sup{X̄_i : i ≥ n}, whose distribution function (for n ≥ 2) is quite complicated even in the exponential case; already for M_2 the resulting expression is unwieldy, and we omit the details. What we can compute is the asymptotic distribution of √n(M_n − 1) as n → ∞. This distribution is the same for a large class of distributions of the X_i's, as the following theorem shows.
Theorem 2. Assume that the (X_i)_{i≥1} are i.i.d. with mean 0, variance 1, and that IE|X_1|^p < ∞ for some p > 2. Let M_n := sup{X̄_i : i ≥ n} for all n ∈ N+. Then √n M_n converges in distribution, as n → ∞, to sup_{0<t≤1} W_t, the maximum of a standard Brownian motion on [0, 1]; that is, lim_{n→∞} Pr(√n M_n ≤ x) = 2Φ(x) − 1 for x ≥ 0, where Φ is the standard normal distribution function.
It is easy to see that, under the assumptions of Theorem 2, the law of the iterated logarithm yields lim sup_{n→∞} (√n/√(2 log log n)) M_n = 1 almost surely.
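The half-normal law 2Φ(x) − 1, the distribution of the maximum of Brownian motion on [0, 1] given by the reflection principle, can be probed with a small simulation. This is a sketch, not part of any proof; the walk length and sample size are chosen only for speed, and the tolerance is loose accordingly.

```python
import math, random

rng = random.Random(2024)
n, paths, x = 400, 4000, 1.0
hits = 0
for _ in range(paths):
    s, smax = 0.0, 0.0
    for _ in range(n):
        s += rng.gauss(0.0, 1.0)
        if s > smax:
            smax = s
    if smax / math.sqrt(n) <= x:
        hits += 1
empirical = hits / paths
# 2*Phi(x) - 1, with Phi expressed through the error function
limit = 2.0 * (0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))) - 1.0
assert abs(empirical - limit) < 0.05
```

The discrete walk slightly undershoots the continuous supremum, so the empirical frequency sits a little above the limit; the O(n^{−1/2}) bias is well inside the tolerance.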

Proofs
Proof of Theorem 1. (a) For each n ∈ N+ consider the random variable Z_n := max{X̄_1, X̄_2, …, X̄_n} and call F_n its distribution function. The sequence (Z_n)_{n≥1} is increasing with limit Z_∞, and hence F_n(x) ↓ F_∞(x) for every x. We compute F_n recursively. For n ∈ N+ and x ≥ 0 we have

F_n(x) = Pr(X_1 + ··· + X_i ≤ ix for i = 1, …, n) = ∫_{K_n(x)} e^{−(y_1+···+y_n)} dy^n,

where dy^k = dy_k ··· dy_2 dy_1 and K_n(x) is the set of y ∈ [0, ∞)^n with y_1 + ··· + y_i ≤ ix for i = 1, …, n. Expanding the integral writes F_n(x) as a linear combination of the terms e^{−kx}, k = 0, 1, …, n, and from Lemma 1, below, we get the explicit form of the coefficients. Letting n → ∞, this implies the first formula for F_∞, namely

F_∞(x) = 1 − (1/x) Σ_{k=1}^∞ (k^{k−1}/k!) x^k e^{−kx}, x ≥ 1.

By the law of large numbers, F_∞(x) = 0 for all x ∈ (−∞, 1).
(b) First we rewrite F_∞ in a more convenient form. The fact that F_∞(x) = 0 for x ∈ [0, 1) implies the remarkable identity (see Fig. 1)

Σ_{k=1}^∞ (k^{k−1}/k!) x^{k−1} e^{−kx} = 1, 0 ≤ x < 1. (2)

Our aim is to compute the value of the series on the left-hand side also for x ≥ 1. The series converges uniformly on [0, ∞), because sup_{x≥0} x^{k−1} e^{−kx} = ((k−1)/(ek))^{k−1}, which makes the k-th term of order k^{−3/2}, summable in k. Thus, by continuity, (2) holds also for x = 1. Now we rewrite (2) in terms of the function

t(x) := Σ_{k=1}^∞ (k^{k−1}/k!) (x e^{−x})^k = −W_0(−x e^{−x}), x ≥ 0, (4)

the smallest nonnegative solution of t e^{−t} = x e^{−x}, so that F_∞(x) = 1 − t(x)/x for x ≥ 1. Since t(x) = x for x ∈ [0, 1], we recover F_∞(x) = 0 there. Now fix u ∈ (0, 1). The relation u = F_∞(x) = 1 − t(x)/x means t(x) = (1 − u)x, and substituting this into t e^{−t} = x e^{−x} gives (1 − u) e^{ux} = 1. Thus, x = −log(1 − u)/u and the proof is complete.
It is consistent with the recursion (7) for V_n and with (6) to define V_0(x, t) := 1, so that (6) holds for all n ∈ N+ ∪ {0}. This agrees with the convention Vol(K_0(x)) = 1 we made in the proof of Theorem 1(a).
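A Monte Carlo sanity check of the closed form for F_∞, assuming the representation F_∞(x) = 1 − t(x)/x for x ≥ 1 derived above; truncating each path at a finite horizon is harmless here because X̄_n → 1 < x almost surely, so late exceedances are exponentially rare. The helper `t_func` is an illustrative name.

```python
import math, random

def t_func(x, tol=1e-12):
    """Solve t*exp(-t) = x*exp(-x) for t in (0, 1] by Newton iteration."""
    a = x * math.exp(-x)
    t = 0.5
    for _ in range(200):
        step = (t * math.exp(-t) - a) / ((1.0 - t) * math.exp(-t))
        t -= step
        if abs(step) < tol:
            break
    return t

rng = random.Random(7)
x, paths, horizon = 1.5, 4000, 500
below = 0
for _ in range(paths):
    s, ok = 0.0, True
    for i in range(1, horizon + 1):
        s += rng.expovariate(1.0)
        if s > i * x:        # some sample mean exceeded x
            ok = False
            break
    below += ok
empirical = below / paths          # estimates F_inf(1.5), up to truncation
exact = 1.0 - t_func(x) / x        # closed form, about 0.583
assert abs(empirical - exact) < 0.05
```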
Proof of Theorem 2. By Theorem 2.2.4 in [3] we may assume that the (X_i)_{i≥1} are defined on the same probability space as a standard Brownian motion (W_s)_{s≥0} in such a way that, with probability 1, |nX̄_n − W_n|/(n^{1/p} log n) → 0 as n → ∞. Since p > 2, the approximation error is negligible at the scale √n, so √n M_n has the same limit in distribution as √n sup_{i≥n} W_i/i. By Brownian scaling the latter converges in distribution to sup_{t≥1} W_t/t, which by time inversion has the same law as sup_{0<s≤1} W_s, whose distribution function is 2Φ(x) − 1 by the reflection principle.
Consider now the following risk model. Assume that the aggregate claim at time n is S_n := X_1 + ··· + X_n, where the (X_i)_{i≥1} are i.i.d. with IEX_1 = 1, the premium rate (per time unit) is c = 1 + θ > 0 (θ is the safety loading of the insurance), and the initial capital is u > −(1 + θ), where negative initial capital is allowed for technical reasons. The risk process is defined by U_n := u + cn − S_n, n ∈ N+. Clearly, the ruin probability

ψ(u) := Pr(U_n < 0 for some n ∈ N+) (11)

is of fundamental importance. Our explicit formulae are useful for computing the minimum initial capital needed to ensure that ψ(u) is small. This particular problem (for general claims) has been studied in [4], under the name discrete-time surplus-process model. It is well known that ψ(u) = 1 when c ≤ 1, no matter how large u is, because IEX_i = 1. Hence the problem is meaningful only for c > 1, i.e., θ > 0.
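A minimal simulation sketch of the ruin probability (11) for exponential claims. The helper name `ruin_prob_mc` and the parameter choices are illustrative, and the finite horizon is safe only because the positive safety loading makes the surplus drift upward.

```python
import random

def ruin_prob_mc(u, c, paths=4000, horizon=600, seed=11):
    """Monte Carlo estimate of the discrete-time ruin probability
    psi(u) = Pr(u + c*n - S_n < 0 for some n), with Exp(1) claims.
    Truncation at `horizon` is harmless when c > 1 (upward drift)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(paths):
        surplus = u
        for _ in range(horizon):
            surplus += c - rng.expovariate(1.0)  # premium in, claim out
            if surplus < 0:
                ruined += 1
                break
    return ruined / paths

est = ruin_prob_mc(0.0, 1.5)   # theta = 0.5, zero initial capital
assert 0.36 < est < 0.48       # closed form of Theorem 3 gives about 0.417
```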
Theorem 3. Assume that the i.i.d. individual claims (X_i)_{i≥1} are exponential random variables with mean 1, fix α ∈ (0, 1) and θ > 0, and set c = 1 + θ. Then:
(a) the ruin probability (11) is given by

ψ(u) = (t(c)/c)^{1+u/c}, u > −c,

where the function t is given by (4);
(b) the minimum initial capital u = u(α, θ) needed to ensure that ψ(u) ≤ α is given by the unique root of the equation

(t(1 + θ)/(1 + θ))^{1+u/(1+θ)} = α. (13)

Proof. (a) The formula follows from (10) applied with λ = u/c. Since t(c) e^{−t(c)} = c e^{−c}, the right-hand side tends to (t(c)/c)^0 = 1 as u ↓ −c, and the monotonicity of ψ implies that ψ(u) = 1 for u ≤ −c.
(b) By the formula of part (a), the function ψ is strictly decreasing on (−c, ∞) and maps that interval onto (0, 1). Therefore, there is a unique u = u(α, θ) > −c such that ψ(u) = α. Let λ := u/c, which is greater than −1. Then, using (10), we see that ψ(u) = α if and only if (t(c)/c)^{1+λ} = α. Substituting c = 1 + θ and λ = u/(1 + θ), these equivalences show that u is the unique solution of (13). The exact values of u obtained from (13) are in perfect agreement with the numerical approximations given in the last line of Table 1 in [4]. Notice that the initial capital u can sometimes be negative, e.g., u(.5, .5) ≈ −.3107.
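Assuming the closed form of Theorem 3, equation (13) can be solved for u explicitly once t(c) is known, and the result can be compared with the value u(.5, .5) ≈ −.3107 quoted above. The helper names `t_func` and `min_capital` are illustrative.

```python
import math

def t_func(x, tol=1e-14):
    """Solve t*exp(-t) = x*exp(-x) for t in (0, 1] by Newton iteration."""
    a = x * math.exp(-x)
    t = 0.5
    for _ in range(200):
        step = (t * math.exp(-t) - a) / ((1.0 - t) * math.exp(-t))
        t -= step
        if abs(step) < tol:
            break
    return t

def min_capital(alpha, theta):
    """Solve (t(c)/c)**(1 + u/c) = alpha for u, where c = 1 + theta."""
    c = 1.0 + theta
    q = t_func(c) / c                     # ruin probability at u = 0
    return c * (math.log(alpha) / math.log(q) - 1.0)

u = min_capital(0.5, 0.5)
assert abs(u - (-0.3107)) < 5e-4   # matches the value quoted in the text
```

Shrinking α (a stricter bound on the ruin probability) raises the required capital, as expected from the monotonicity of ψ.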