A NEW PROOF OF AN OLD RESULT BY PICKANDS

Let {ξ(t)}_{t∈[0,h]} be a stationary Gaussian process with covariance function r such that r(t) = 1 − C|t|^α + o(|t|^α) as t → 0. We give a new and direct proof of a result originally obtained by Pickands, on the asymptotic behaviour as u → ∞ of the probability P{sup_{t∈[0,h]} ξ(t) > u} that the process ξ exceeds the level u. As a by-product, we obtain a new expression for Pickands constant H_α.


Introduction and main result
Let {ξ(t)}_{t∈[0,h]} be a continuous centered stationary Gaussian process with covariance function r(t) = Cov{ξ(s), ξ(s + t)} that satisfies r(t) < 1 for t ∈ (0, h] and

    r(t) = 1 − C|t|^α + o(|t|^α) as t → 0,    (1)

where h > 0, α ∈ (0, 2] and C > 0 are constants. Note that (1) includes, for example, covariance functions of the form e^{−|t|^α}, where the case α = 1 corresponds to an Ornstein-Uhlenbeck process. Further, the case α = 2 in (1) corresponds to mean-square differentiable processes, while processes with 0 < α < 2 are non-differentiable.

The tail distribution of sup_{t∈[0,h]} ξ(t) was originally obtained in [6] by means of a long and complicated proof involving so-called ε-upcrossings, although his proof was not quite complete. The Pickands result is as follows:

Theorem. If (1) holds, then we have

    P{sup_{t∈[0,h]} ξ(t) > u} = C^{1/α} H_α h u^{2/α} Φ(u) (1 + o(1)) as u → ∞.    (2)

Here Φ(u) = ∫_u^∞ (1/√(2π)) e^{−x²/2} dx is the standard Gaussian tail distribution function, q(u) = u^{−2/α}, and H_α is a strictly positive and finite constant, the value of which depends on α only and is given by

    H_α = lim_{T→∞} T^{−1} E{exp(sup_{t∈[0,T]} ζ(t))},    (3)

where {ζ(t)}_{t>0} is a nonstationary Gaussian process with mean and covariance function

    E{ζ(t)} = −t^α and Cov{ζ(s), ζ(t)} = s^α + t^α − |s − t|^α.    (4)

Qualls and Watanabe [8], Leadbetter et al. [4] and others gave clarifications of Pickands' original proof. The proof as it stands today in the literature is long and complicated, but it is also important, for example, as a model proof in extreme value theory.

In this paper we give a new and direct proof of (2) that is in part inspired by [1,2,9,10]. Given a constant a > 0, a key step in our proof is to find the tail behaviour of the distribution of the discretized maximum max_{k∈{0,...,⌊h/(aq(u))⌋}} ξ(aq(u)k). Knowing that tail, we can in turn find the tail behaviour of the continuous supremum sup_{t∈[0,h]} ξ(t) featuring in (2). The technical details that we employ to this end are much simpler and more direct than those in the literature. As a by-product, our new proof produces the following new formula for the constant H_α (cf. (3)):

Theorem 1.
If (1) holds, then (2) holds with the constant H_α given by

    H_α = lim_{a↓0} a^{−1} P{ζ(ak) + η ≤ 0 for all k ∈ ℕ},

where {ζ(t)}_{t>0} is the Gaussian process with means and covariances given in (4), and η is a unit mean exponentially distributed random variable that is independent of the process ζ.
The exact values of Pickands constants H_α are known only for α = 1 and α = 2, namely H_1 = 1 and H_2 = 1/√π, but are unknown for other values of α, although many efforts to find numerical approximations of H_α have been made (see for example [3]). Possibly, our new formula for H_α given in Theorem 1 might be utilized in future work to compute that constant numerically.
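To illustrate the kind of numerical work alluded to above, the classical definition (3) can be approximated by brute-force Monte Carlo: simulate ζ on a finite grid over [0, T] from its mean and covariance in (4), and average the exponential of the grid maximum. The sketch below is only indicative; the helper `pickands_mc` and the choices of grid spacing, horizon T and sample size are ours, and the estimator is biased for finite T and nonzero spacing.

```python
import numpy as np

def pickands_mc(alpha, T=10.0, d=0.1, n_samples=2000, seed=0):
    """Crude Monte Carlo estimate of H_alpha via (3):
    H_alpha ~ (1/T) E exp( max over grid of zeta(t) ),
    where zeta has mean -t^alpha and covariance
    s^alpha + t^alpha - |s - t|^alpha, cf. (4)."""
    rng = np.random.default_rng(seed)
    t = np.arange(d, T + d / 2, d)               # grid over (0, T]; zeta(0) = 0
    mean = -t ** alpha
    cov = (t[:, None] ** alpha + t[None, :] ** alpha
           - np.abs(t[:, None] - t[None, :]) ** alpha)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(t)))  # jitter for stability
    z = mean + rng.standard_normal((n_samples, len(t))) @ L.T
    m = np.maximum(z.max(axis=1), 0.0)           # include t = 0, where zeta(0) = 0
    return np.exp(m).mean() / T

print(pickands_mc(1.0))   # rough estimate; the exact value is H_1 = 1
```

The estimate is crude: the Cholesky simulation costs O(n³) in the grid size, and the heavy tail of exp(max ζ) makes the Monte Carlo variance large, which is one reason more refined schemes are studied in the literature.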

New proof of (2)
Throughout the proof we use Φ and q as shorthand notation for Φ(u) and q(u), respectively. A key step in the proof is to show that, given constants a, h > 0, the relation (5) holds.

Lemma 1. If (1) holds, then (5) holds.
Proof. Note that ξ_u(t) is the sum of the two independent Gaussian random variables. To establish (a), it is enough to observe that, by (1), for s, t > 0, we have, as u → ∞, the stated limits. Further, (b) follows from noting that, by elementary calculations, we have the stated identity.

In the second preparatory lemma required to prove Lemma 1, we find the asymptotic behaviour of the probability P{max_{k∈{0,...,N}} ξ(aqk) > u} as u → ∞ and N → ∞.

Proof. By the inclusion-exclusion formula together with stationarity of ξ and Lemma 2, we have the stated bounds. Sending N → ∞ on the right-hand side, the lemma follows by an elementary argument.
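The inclusion-exclusion step above, like the Bonferroni step later in the proof, rests on bounds that hold pointwise for indicator functions and hence for the probability of a union. A minimal numerical illustration (the correlated Gaussian vector and its covariance are just an arbitrary stand-in for the sampled process, not the process of the paper):

```python
import numpy as np

# Bonferroni bounds for P{max_k X_k > u}:
#   sum_k p_k - sum_{j<k} p_jk  <=  P{max_k X_k > u}  <=  sum_k p_k,
# where p_k = P{X_k > u} and p_jk = P{X_j > u, X_k > u}.
rng = np.random.default_rng(1)
n, u = 8, 1.5
t = np.arange(n)
cov = np.exp(-np.abs(t[:, None] - t[None, :]) ** 0.8)   # example covariance
x = rng.multivariate_normal(np.zeros(n), cov, size=50000)

exceed = x > u                                  # indicators 1{X_k > u}
p_max = exceed.any(axis=1).mean()               # estimate of P{max > u}
upper = exceed.mean(axis=0).sum()               # Boole / union upper bound
pairs = sum((exceed[:, j] & exceed[:, k]).mean()
            for j in range(n) for k in range(j + 1, n))
lower = upper - pairs                           # Bonferroni lower bound
print(lower <= p_max <= upper)                  # True: the bounds hold pathwise
```

Because the underlying inequalities hold for the indicators sample by sample, the empirical estimates satisfy the bounds exactly, not just in the limit.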
We are now prepared to prove Lemma 1.

Proof of Lemma 1. To find an upper estimate in (5), note that, by Boole's inequality together with stationarity and Lemma 3, we have the stated bound. To find a lower estimate in (5) we make two preparatory observations. Firstly, by stationarity and Lemma 3, we have the stated bound. Secondly, using the elementary inequality Φ(u + x/u)/Φ(u) ≤ e^{−x} for any u > 0 and x > 0, the right-hand side of the above equation in turn does not exceed the stated quantity [which is less than 1 by (1)]. It follows that this bound tends to 0 as N → ∞.
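The elementary inequality used above follows from writing Φ(u + y) = ∫_u^∞ φ(s + y) ds = e^{−y²/2} ∫_u^∞ φ(s) e^{−sy} ds ≤ e^{−uy − y²/2} Φ(u) and taking y = x/u, which gives Φ(u + x/u)/Φ(u) ≤ e^{−x − x²/(2u²)} ≤ e^{−x}. A quick numerical sanity check using only the standard library (the grid of test points is arbitrary):

```python
import math

def gauss_tail(u):
    """Standard Gaussian tail Phi(u) = P{N(0,1) > u}, via erfc."""
    return 0.5 * math.erfc(u / math.sqrt(2))

# check Phi(u + x/u) / Phi(u) <= exp(-x) on a small grid
for u in (0.5, 1.0, 2.0, 5.0):
    for x in (0.1, 1.0, 3.0, 10.0):
        ratio = gauss_tail(u + x / u) / gauss_tail(u)
        assert ratio <= math.exp(-x)
print("inequality verified on grid")
```

Computing the tail through `math.erfc` rather than `1 - cdf` avoids catastrophic cancellation for large u, which matters here since both tails in the ratio are tiny.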
By Bonferroni's inequality and stationarity, together with Lemma 3, (8) and (9), we have Putting this lower estimate together with the upper estimate (7), we arrive at (5).
We have now done all preparations required to prove (2). Finally, we prove Theorem 1, which provides a new expression for Pickands constant H_α.

Proof of Theorem 1. By (6), given any constant a > 0, we have (11). Sending a ↓ 0 in (11) and using Lemmas 1 and 4, we get the upper estimate. On the other hand, by Lemma 1 alone, we have the lower estimate. Combining the above upper and lower estimates, it follows that the limit H_α and the limit on the left-hand side of (2) both exist, and that the identity (2) holds.
To complete the proof, we must show that H_α is finite and strictly positive. To that end, note that, by Lemmas 1 and 4, the right-hand side of (11) is finite for a > 0 sufficiently small. Hence the left-hand side (which does not depend on a) is finite, so that H_α is finite by (2). On the other hand, by Bonferroni's inequality and stationarity together with (9), we have the stated lower bound for N ∈ ℕ sufficiently large.
Here the right-hand side is strictly positive, so that (2) shows that H α is strictly positive.

Discussion
In [5] Michna has recently published a proof of the Pickands result. However, Michna's proof is not really a new proof, but rather a version of a proof due to Piterbarg [7] of a more general theorem on Gaussian extremes, edited to address only the particular case of the Pickands result, as is also stated by the author himself. The proofs of Michna and Piterbarg are very different from ours in that they make crucial use of a technique that relates the original problem of proving the Pickands result to problems for extremes of Gaussian random fields. Also, Michna and Piterbarg do their calculations directly on the continuous suprema, which leads to more complicated arguments, while we work throughout with appropriate discrete approximations to continuous suprema. Virtually all calculations of Michna and Piterbarg, as well as their expression for Pickands constant, are completely different from ours. The one exception is the use of the elementary Bonferroni inequality for a lower bound, which we also employ (as did Pickands), but there again the technical details of that usage are quite different from ours, because they work with continuous suprema while we work with discrete approximations. Also, our proof is entirely self-contained and uses only basic graduate probability theory. On the other hand, the proofs by Michna and Piterbarg make crucial use of additional sophisticated Gaussian theory, such as Borell's inequality, Slepian's lemma and weak convergence in the space C. This is again because they choose to work with continuous suprema.