Equidistant sampling for the maximum of a Brownian motion with drift on a finite horizon

A Brownian motion observed at equidistant sampling points renders a random walk with normally distributed increments. For the difference between the expected maximum of the Brownian motion and its sampled version, an expansion is derived with coefficients in terms of the drift, the Riemann zeta function and the normal distribution function.


Introduction
Let {B(t)}_{t≥0} denote a Brownian motion with drift coefficient µ and variance parameter σ², so that

  B(t) = µt + σW(t),   (1)

with {W(t)}_{t≥0} a Wiener process (standard Brownian motion). Without loss of generality, we set B(0) = 0, σ = 1 and consider the Brownian motion on the interval [0, 1]. When we sample the Brownian motion at time points n/N, n = 0, 1, . . . , N, the resulting process is a random walk with normally distributed increments (Gaussian random walk). The fact that Brownian motion evolves in continuous space and time leads to great simplifications in determining its properties. In contrast, the Gaussian random walk, moving only at equidistant points in time, is an object much harder to study. Although it is obvious that, for N → ∞, the behavior of the Gaussian random walk can be characterized by the continuous-time diffusion equation, there are many effects to take into account for finite N. This paper deals with the expected maximum of the Gaussian random walk and, in particular, its deviation from the expected maximum of the underlying Brownian motion. This relatively simple characteristic already turns out to have an intriguing description. In Section 2 we derive an expansion with coefficients in terms of the Riemann zeta function and (the derivatives of) the normal distribution function. Some historical remarks follow, and the proof is presented in Section 3.
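To make the sampling scheme concrete, here is a small simulation sketch (the function name and parameters are ours, purely illustrative): it builds the Gaussian random walk from i.i.d. N(µ/N, 1/N) increments and estimates the expected maximum of the sampled path at zero drift, which for moderate N falls noticeably short of E[max_{0≤t≤1} B(t)] = √(2/π) ≈ 0.798; that shortfall is exactly the sampling error studied in this paper.

```python
import math
import random

def sample_gaussian_walk(mu: float, N: int, rng: random.Random) -> list:
    """Sample B(n/N), n = 0, ..., N: a Gaussian random walk whose
    increments are i.i.d. N(mu/N, 1/N)."""
    walk = [0.0]
    step_sd = math.sqrt(1.0 / N)
    for _ in range(N):
        walk.append(walk[-1] + rng.gauss(mu / N, step_sd))
    return walk

# Monte Carlo estimate of E[max_n B(n/N)] for zero drift, N = 50 sampling points.
rng = random.Random(42)
N, paths = 50, 20000
est = sum(max(sample_gaussian_walk(0.0, N, rng)) for _ in range(paths)) / paths
# est should sit below E[max_{0<=t<=1} B(t)] = sqrt(2/pi) ~ 0.798 by roughly
# 0.5826/sqrt(50) ~ 0.082, the leading term of the expansion derived below.
```

The seed is fixed only for reproducibility of the illustration; any seed gives an estimate within Monte Carlo error of the same value.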

Main result and discussion
By Spitzer's identity (see [19,14]),

  E[max_{0≤n≤N} B(n/N)] = Σ_{n=1}^{N} (1/n) E[B⁺(n/N)],   (2)

where B⁺(t) = max{0, B(t)}. The monotone convergence theorem, in combination with a Riemann sum approximation of the right-hand side of (2), gives (see [1])

  E[max_{0≤t≤1} B(t)] = ∫_0^1 t^{−1} E[B⁺(t)] dt.   (3)

The mean sampling error, as a function of the number of sampling points, is then given by

  ∆_N(µ) = ∫_0^1 t^{−1} E[B⁺(t)] dt − Σ_{n=1}^{N} (1/n) E[B⁺(n/N)].   (4)

Since B(t) is normally distributed with mean µt and variance t, one can compute

  E[B⁺(t)] = µt Φ(µ√t) + √(t/(2π)) e^{−µ²t/2},   (5)

where Φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−u²/2} du. Substituting (5) into (4) yields

  ∆_N(µ) = ∫_0^1 g(t) dt − (1/N) Σ_{n=1}^{N} g(n/N),   (6)

where

  g(t) = µΦ(µ√t) + e^{−µ²t/2}/√(2πt).   (7)

We are then in the position to present our main result.
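The derivation above can be checked numerically. The sketch below (a minimal implementation of our reading of Spitzer's identity and of E[B⁺(t)], with names of our choosing) computes the expected maximum of the Gaussian random walk and, at zero drift, the sampling error ∆_N(0); the result matches the leading term −ζ(1/2)/√(2πN) ≈ 0.5826/√N of the expansion presented next.

```python
import math

def Phi(x: float) -> float:
    """Standard normal distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_positive_part(mu: float, t: float) -> float:
    """E[B+(t)] for B(t) ~ N(mu*t, t): mu*t*Phi(mu*sqrt(t)) + sqrt(t/(2*pi))*exp(-mu^2*t/2)."""
    s = math.sqrt(t)
    return mu * t * Phi(mu * s) + s * math.exp(-0.5 * mu * mu * t) / math.sqrt(2.0 * math.pi)

def expected_max_sampled(mu: float, N: int) -> float:
    """Spitzer's identity: E[max_{0<=n<=N} B(n/N)] = sum_{n=1}^{N} E[B+(n/N)] / n."""
    return sum(mean_positive_part(mu, n / N) / n for n in range(1, N + 1))

# Zero drift: E[max_{0<=t<=1} B(t)] = sqrt(2/pi), so the sampling error is
N = 10000
delta = math.sqrt(2.0 / math.pi) - expected_max_sampled(0.0, N)
# The expansion predicts delta ~ -zeta(1/2)/sqrt(2*pi*N) ~ 0.5826/sqrt(N).
```

At µ = 0 the Spitzer sum reduces to (2πN)^{−1/2} Σ n^{−1/2}, which is why the Riemann zeta function at 1/2 appears in the leading coefficient.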

Theorem 1.
The difference in expected maximum between {B(t)}_{0≤t≤1} and its associated Gaussian random walk, obtained by sampling {B(t)}_{0≤t≤1} at N equidistant points, is given, for |µ/√N| < 2√π, by the expansion (8), with O uniform in µ, ζ the Riemann zeta function, p some positive integer, B_n the Bernoulli numbers, and g^(k) defined as the kth derivative of g in (7).
In the expression in (8), φ(x) = e^{−x²/2}/√(2π) and c_j = 0 for j = 6, 10, 14, . . .. The first term c_1 has been identified by Asmussen, Glynn & Pitman [1], Thm. 2 on p. 884, and Calvin [5], Thm. 1 on p. 611, although Calvin does not express c_1 in terms of the Riemann zeta function. The second term c_2 was derived by Broadie, Glasserman & Kou [3], Lemma 3 on p. 77, using extended versions of the Euler-Maclaurin summation formula presented in [1]. To the best of the authors' knowledge, all higher terms appear in the present paper for the first time.
The distribution of the maximum of Brownian motion with drift on a finite interval is known to be (see Shreve [18], p. 297)

  P(max_{0≤t≤1} B(t) ≤ x) = Φ(x − µ) − e^{2µx} Φ(−x − µ),   x ≥ 0,

and integration thus yields

  E[max_{0≤t≤1} B(t)] = µΦ(µ) + φ(µ) + (2Φ(µ) − 1)/(2µ).   (11)

A combination of (11) and (8) leads to a full characterization of the expected maximum of the Gaussian random walk. Note that the mean sampling error for the Brownian motion defined in (1) on [0, T], sampled at N equidistant points, is given by σ√T · ∆_N(µ√T/σ). When the drift µ is negative, results can be obtained for the expected all-time maximum. That is, for the special case µ < 0, σ = 1, the expected all-time maximum admits the expansion (12), valid for −2√π < µ < 0. Note that (12) follows from Theorem 1. The result, however, was first derived by Pollaczek [16] in 1931 (see also [11]). Apparently unaware of this fact, Chernoff [7] obtained the first term −ζ(1/2)/√(2π), Siegmund [17], Problem 10.2 on p. 227, obtained the second term 1/4, and Chang & Peres [6], p. 801, obtained the third term −ζ(−1/2)/(2√(2π)). The complete result was rediscovered by the authors in [9], and more results for the Gaussian random walk were presented in [9,10], including series representations for all cumulants of the all-time maximum.
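The Shreve distribution function can be sanity-checked numerically. In the sketch below, `max_cdf` implements the reflection formula P(max ≤ x) = Φ(x − µ) − e^{2µx}Φ(−x − µ), and the expected maximum is recovered by integrating the tail P(max > x); the closed form in `expected_max_closed` is our own reconstruction of the integrated expression (not quoted from the paper), which reduces to √(2/π) at µ = 0.

```python
import math

def Phi(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x: float) -> float:
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def max_cdf(x: float, mu: float) -> float:
    """P(max_{0<=t<=1} B(t) <= x) for x >= 0, sigma = 1 (reflection formula)."""
    return Phi(x - mu) - math.exp(2.0 * mu * x) * Phi(-x - mu)

def expected_max_bm(mu: float, upper: float = 12.0, steps: int = 60000) -> float:
    """E[max B] = integral_0^infty P(max > x) dx, by the trapezoidal rule."""
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * (1.0 - max_cdf(x, mu))
    return h * total

def expected_max_closed(mu: float) -> float:
    """Our candidate closed form: mu*Phi(mu) + phi(mu) + (2*Phi(mu) - 1)/(2*mu)."""
    if mu == 0.0:
        return math.sqrt(2.0 / math.pi)  # limit as mu -> 0
    return mu * Phi(mu) + phi(mu) + (2.0 * Phi(mu) - 1.0) / (2.0 * mu)
```

The truncation point `upper = 12` is an assumption that suffices for moderate drifts, since the tail of the maximum decays like a Gaussian.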

Proof of Theorem 1
We shall treat separately the cases µ < 0, µ > 0 and µ = 0. The proof for µ < 0 in Subsection 3.1 largely builds upon Euler-Maclaurin summation and the result in Section 4 of [9] on the expected value of the all-time maximum of the Gaussian random walk. The result for µ > 0 in Subsection 3.2 then follows almost immediately due to convenient symmetry properties of Φ. Finally, in Subsection 3.3, the issue of uniformity in µ is addressed and the result for µ = 0 is established in two ways: first by taking the limit µ ↑ 0, and subsequently by a direct derivation that uses Spitzer's identity (4) for µ = 0 and an expression for the Hurwitz zeta function.
The negative-drift case
An application of Euler-Maclaurin summation yields an expansion in which B_n(t) denotes the nth Bernoulli polynomial, B_n = B_n(0) denotes the nth Bernoulli number, and f(x) = g(x/N)/N. Since f^(l)(x) = g^(l)(x/N)/N^(l+1), we thus obtain (20), with remainder term R_{p,N}. From the definition of g in (7) it is seen that g^(2p) is smooth and rapidly decaying; hence R_{p,N} is well defined, and we even have R_{p,N} = O(1/N^(2p+2)). Therefore, from (20), we obtain (23). Combining (16) and (23) completes the proof, aside from the uniformity issue, for the case that µ = −γ < 0.
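Since the proof rests on Euler-Maclaurin summation, a small numerical sketch may help. At zero drift, g is proportional to t^{−1/2}, and the machinery reduces to the classical Euler-Maclaurin expansion of the partial sums of n^{−1/2}, whose constant term is ζ(1/2) (the value below is the standard one, hard-coded) and whose first correction is supplied by the Bernoulli number B_2 = 1/6.

```python
import math

def partial_sum(N: int) -> float:
    """sum_{n=1}^{N} n^{-1/2}, the driver of the zero-drift Riemann-sum error."""
    return sum(n ** -0.5 for n in range(1, N + 1))

# Euler-Maclaurin with Bernoulli number B_2 = 1/6:
#   sum_{n=1}^{N} n^{-1/2} = 2*sqrt(N) + zeta(1/2) + (1/2)*N^{-1/2}
#                            - (1/24)*N^{-3/2} + O(N^{-7/2}).
ZETA_HALF = -1.4603545088095868

def em_approx(N: int) -> float:
    return 2.0 * math.sqrt(N) + ZETA_HALF + 0.5 / math.sqrt(N) - 1.0 / (24.0 * N ** 1.5)
```

Already at N = 10 the approximation is accurate to about 10^{-6}, which illustrates why a fixed number of Euler-Maclaurin terms controls the remainder so well.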

The positive-drift case
The analysis so far was for the case with negative drift µ = −γ with γ > 0. The results can be transferred to the case µ > 0 as follows. Note first from Φ(−x) = 1 − Φ(x) that replacing µ by −µ in (7) changes g(t) into g(t) − µ. Therefore, by (6), ∆_N(−µ) = ∆_N(µ), since the term µ vanishes from the right-hand side of (6). Then use the result already proved with −µ < 0 instead of µ. This requires replacing g(t) from (7) by −µΦ(−µ√t) + e^{−µ²t/2}/√(2πt) and µ by −µ everywhere in (8). The term 2g(t) − µ then becomes µ(2Φ(µ√t) − 1) + 2e^{−µ²t/2}/√(2πt), which is in the form 2g(t) − µ with g from (7). Next, the derivatives are unaffected, since g(t) and g(t) − µ have the same kth derivative for k ≥ 1. Finally, the infinite series with the ζ-function involves µ quadratically. Thus writing down (8) with −µ < 0 instead of µ turns the right-hand side into the same form with g given by (7). This completes the proof of Theorem 1 for µ ≠ 0.
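The symmetry step can be verified numerically under our reading of (7) (the concrete form of g below is our assumption): Φ(−x) = 1 − Φ(x) gives g(t; −µ) = g(t; µ) − µ pointwise, and the constant −µ contributes −µ to the integral over [0, 1] and −µ to the Riemann sum (1/N)Σ g(n/N), so it cancels from ∆_N in (6).

```python
import math

def Phi(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def g(t: float, mu: float) -> float:
    """Our reading of (7): g(t) = mu*Phi(mu*sqrt(t)) + exp(-mu^2*t/2)/sqrt(2*pi*t)."""
    s = math.sqrt(t)
    return mu * Phi(mu * s) + math.exp(-0.5 * mu * mu * t) / math.sqrt(2.0 * math.pi * t)

# Phi(-x) = 1 - Phi(x) implies g(t; -mu) = g(t; mu) - mu. The constant -mu adds
# -mu to both the integral and the Riemann sum in (6), hence cancels from
# Delta_N, giving Delta_N(mu) = Delta_N(-mu).
checks = [abs(g(t, -mu) - (g(t, mu) - mu))
          for mu in (0.3, 1.7) for t in (0.05, 0.4, 1.0)]
```

The identity holds exactly; the checks only absorb floating-point rounding.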

The zero-drift case
We shall first establish the uniformity in µ < 0 of the error term O in (8), for which we need that the relevant remainder term can be bounded uniformly in µ < 0 as O(N^{−2p}). Write ν = µ²/2, and observe from (27) and Newton's formula that, for k = 1, 2, . . .