KENDALL’S IDENTITY FOR THE FIRST CROSSING TIME REVISITED

We give a new, relatively compact proof of the famous identity for the distribution of the first hitting time of a linear boundary by a skip-free process with stationary independent increments. The proof uses martingale identities and a change of measure. Let {X_t}_{t≥0}, X_0 = 0, be a Lévy process which is skip-free in the positive direction; that is, X_t has no positive jumps and its increments are stationary and independent. For x > 0 set τ(x) := inf{t > 0 : X_t ≥ x}, with τ(x) = ∞ on the event {sup_{t≥0} X_t < x}. The following result is well known.

Theorem. For any y, s > 0,

    ∫_y^∞ x^{-1} P(τ(x) ≤ s) dx = ∫_0^s t^{-1} P(X_t > y) dt.    (1)

If X_t has a density p_X(t, x) at x, then τ(x) also has a density p_τ(t, x) at t, and

    p_τ(t, x) = (x/t) p_X(t, x).    (2)

Relation (2) was first observed in a special case in [7]. Later the theorem was proved in [6], [9], [11] (under additional conditions) and [2]. Moreover, it was shown in [8] that (1) is a necessary condition for the process {X_t} to be skip-free.
A discrete time (and, of course, discrete space) analog of (2) is closely related to the classical ballot problem and was first given (in a special case) as early as 1711 by A. de Moivre. For historical references and comments (and some generalizations) see e.g. [1] and Section 21 in [10]. Some interesting extensions to the multidimensional case were given in [4]. Known proofs of the identities are either analytical (exploiting Laplace transforms in time and their multiplicative structure), or based on a limiting procedure and a combinatorial argument (given, in particular, in [5] and [10]), or based on factorization identities [3]. We present a new, rather short and elegant (from our point of view) proof of the above result, which uses (a) martingale techniques to find the Laplace transform of the crossing time, and then (b) a change of measure to find the desired representation for the distribution of that time in terms of the distribution of X_t. To avoid a trivial situation, we assume that P(X_t > 0) > 0 for t > 0.
Proof. (i) Since there are no positive jumps in the process {X_t}, the m.g.f. ϕ(λ) := E e^{λX_1} is finite for any λ ≥ 0 and is analytic on the positive half-line; put μ(λ) := ln ϕ(λ). By the independence of increments, the process

    M_t := e^{λX_t}/ϕ^t(λ), t ≥ 0,    (3)

is a martingale. Denote by P_λ the probability corresponding to the respective Cramér transform of the distribution of {X_t}, so that

    dP_λ/dP|_{F_t} = M_t, and m_λ := E_λ X_1 = ϕ'(λ)/ϕ(λ) = μ'(λ),    (4)

and the process {X_t} still remains a Lévy process under P_λ. Let λ_0 ≥ 0 denote the largest root of the equation ϕ(λ) = 1. Since ϕ is convex and P(X_t > 0) > 0, one has ϕ(λ) > 1 for all λ > λ_0.
(ii) The last inequality, together with the obvious fact that M_t < e^{λx}/ϕ^t(λ) on the event {τ(x) > t}, implies that, as t → ∞, M_t → 0 on {τ(x) = ∞} (so that we can formally set M_{τ(x)} = 0 on this event) and also E(M_t; τ(x) > t) → 0. These relations ensure that the optional stopping theorem holds:

    E M_{τ(x)} = 1    (5)

(this can be shown e.g. by applying the theorem to the bounded stopping time τ(x) ∧ t and then letting t → ∞). It is also clear that μ = μ(λ) is an increasing function mapping (λ_0, ∞) onto (0, ∞) and hence has an inverse function λ = λ(μ), μ ∈ (0, ∞), with

    λ'(μ) = 1/μ'(λ(μ)) = 1/m_{λ(μ)}    (6)

from (4). Since the process is skip-free, X_{τ(x)} = x on {τ(x) < ∞}, and so (5) is equivalent to E e^{-μτ(x)} = e^{-λx}; differentiating this relation w.r.t. μ yields

    E τ(x) e^{-μτ(x)} = x λ'(μ) e^{-λ(μ)x} = (x/m_{λ(μ)}) e^{-λ(μ)x}.    (7)

(iii) Next denote by T_A := ∫_0^∞ 1_A(X_t) dt the time spent by our process in the set A (i.e. the sojourn time of A). We will make use of the following fact: for any 0 < a < ∞ and λ > λ_0,

    E_λ T_{(0,a]} = a/m_λ.    (8)

To prove it, note that due to (4) and Wald's identity one has, for any a > 0,

    E_λ τ(a) = a/m_λ.    (9)

Further, one can easily see that

    τ(a) - T_{(-∞,0]} ≤ T_{(0,a]} ≤ τ(a) + T',    (10)

where T', the time spent in (-∞, 0] by the shifted process {X_{τ(a)+t} - a}_{t≥0}, is distributed as T_{(-∞,0]}, and

    E_λ T_{(-∞,0]} = ∫_0^∞ P_λ(X_t ≤ 0) dt ≤ ∫_0^∞ ϕ^{-t}(λ) dt < ∞

since ϕ(λ) > 1. Now this and the RHS of (10) imply that E_λ T_{(a,b]} < ∞ for any finite interval (a, b] [the last fact also follows from the well-known recurrence-transience dichotomy for Lévy processes]. Together with the obvious observation that, for any 0 < a < b < c < ∞,

    T_{(a,c]} = T_{(a,b]} + T_{(b,c]},

and that T_{(a,b]} is distributed as T_{(0,b-a]} (by the strong Markov property at τ(a), since the skip-free process first enters (a, ∞) exactly at the level a), this means that a corresponding equality holds for the (finite) E_λ-expectations of these variables, and hence we have

    E_λ T_{(0,a]} = γa, a > 0,

for some constant γ < ∞. But from (9) and (10) one gets

    a/m_λ - E_λ T_{(-∞,0]} ≤ γa ≤ a/m_λ + E_λ T_{(-∞,0]},

and, dividing by a and letting a → ∞, we immediately see that this constant γ must be equal to 1/m_λ, which completes the proof of (8).
(iv) Note that equality (8) can be re-written, after changing the measure back via (4), as

    ∫_{(0,a]} e^{λz} [∫_0^∞ e^{-μt} P(X_t ∈ dz) dt] = a/m_λ.

Since the above holds for any a > 0, the expression in the brackets is (for any λ > λ_0) nothing else but a multiple of the Lebesgue measure:

    ∫_0^∞ e^{-μt} P(X_t ∈ dz) dt = m_λ^{-1} e^{-λz} dz, z > 0.

Now dividing the LHS and RHS of (7) by x and integrating them w.r.t. dx over (y, ∞) we get, using the above formula, that

    ∫_0^∞ e^{-μt} t L(dt) = ∫_y^∞ m_λ^{-1} e^{-λ(μ)x} dx = ∫_0^∞ e^{-μt} P(X_t > y) dt, where L(dt) := ∫_y^∞ x^{-1} P(τ(x) ∈ dt) dx.

As this holds for all μ > 0, the measure t L(dt) has the density P(X_t > y), or, equivalently, L(dt) has the density P(X_t > y)/t. And this completes the proof since the desired identity (1) just represents that fact in terms of the respective distribution functions.