On the one-sided exit problem for fractional Brownian motion

We consider the one-sided exit problem for fractional Brownian motion (FBM), which is equivalent to the question of the distribution of the lower tail of the maximum of FBM on the unit interval. We improve the bounds given by Molchan (1999) and shed some light on the relation to the quantity $I$ studied there.


Introduction
This paper is concerned with the so-called one-sided exit problem for stochastic processes: if (X(t))_{t≥0} is a real-valued stochastic process, we want to find the asymptotic rate, as T → ∞, of the probability

F(T) := P[ sup_{0≤t≤T} X(t) ≤ 1 ].    (1)

This problem arises in a number of contexts, the most important of which is the relation to the Burgers equation with random initial data (see e.g. [2,8]). Further applications concern pursuit problems, relations to random polynomials, and polymer models. We refer to [4] for more background information and links to further literature. In the context of Gaussian processes, there seem to be very few results concerning the asymptotic rate of (1). The precise rate is known only for Brownian motion, integrated Brownian motion, and a handful of other rather particular Gaussian processes.
The aim of this paper is to study the rate in (1) for fractional Brownian motion. Fractional Brownian motion (FBM) X is a centered Gaussian process with covariance

E[X(t)X(s)] = (1/2)( t^{2H} + s^{2H} − |t−s|^{2H} ),

where 0 < H < 1 is the so-called Hurst parameter. It is well known that FBM is self-similar with index H and has stationary increments. The question of the exit probability (1) for FBM has been investigated by Sinaĭ [10] and Molchan [5,6,7]. The most precise result concerning the asymptotics in (1) for fractional Brownian motion, [5], states that

T^{−(1−H)} e^{−k√(log T)} ≤ F(T) ≤ T^{−(1−H)} e^{k√(log T)}    (2)

for some positive constant k and T large enough.
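As an aside, the covariance E[X(t)X(s)] = (1/2)(t^{2H} + s^{2H} − |t−s|^{2H}) makes FBM easy to simulate, which gives a numerical handle on the exit probability in (1). The following sketch is purely illustrative and not part of the argument; the grid size, sample count, and seed are arbitrary choices.

```python
import numpy as np

def fbm_paths(H, T, n_grid=200, n_paths=2000, seed=0):
    """Sample FBM paths on [0, T] via Cholesky factorization of the covariance."""
    t = np.linspace(T / n_grid, T, n_grid)           # omit t = 0, where X(0) = 0
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (u ** (2 * H) + s ** (2 * H) - np.abs(u - s) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_grid))  # tiny jitter for stability
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal((n_grid, n_paths))     # columns are discretized paths

def exit_prob(H, T, **kw):
    """Monte Carlo estimate of F(T) = P[sup_{0<=t<=T} X(t) <= 1]."""
    X = fbm_paths(H, T, **kw)
    return float(np.mean(X.max(axis=0) <= 1.0))

# F(T) should decay roughly like T^{-(1-H)} as T grows:
print(exit_prob(0.5, 10.0), exit_prob(0.5, 100.0))
```

For H = 1/2 this reduces to Brownian motion, where the reflection principle gives the exact value 2Φ(1/√T) − 1 to compare against.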
In the physics literature, this result is used in the sense of F(T) ≈ T^{−(1−H)}, disregarding the loss factors e^{±k√(log T)}. We stress that already proving (2) is highly non-trivial and that presently there is no approach to obtain the precise order of this probability. We mention that, beyond the classical results for such particular processes as Brownian motion or integrated Brownian motion, there is no theory to obtain even the polynomial term. Due to this lack of theory, even simple-looking estimates require rather involved calculations, see e.g. (11) below.
In this paper, we give the following improvement of (2).

Theorem 1.
There is a constant c > 0 such that, for large enough T, we have

T^{−(1−H)} (log T)^{−c} ≤ F(T) ≤ T^{−(1−H)} (log T)^{c}.

Before giving the proofs of the lower and upper bound in Sections 2 and 3, respectively, we make some comments.
Molchan [5] related the problem of finding the asymptotics of F to the quantity

I(T) := E[ ( ∫_0^T e^{X(t)} dt )^{−1} ],    (3)

and he was even able to determine the strong asymptotic rate of I. However, when passing over from I to F, the slowly varying terms e^{±k√(log T)} appear. This is essentially due to a change of measure argument: if g is a function in the reproducing kernel Hilbert space of X, then the asymptotic rates for the exit problems of X + g and X, respectively, differ at most by e^{±k√(log T)}, cf. [1], Proposition 3.1.
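The quantity I(T) = E[(∫_0^T e^{X(t)} dt)^{−1}] is also straightforward to approximate numerically, which gives a quick sanity check of Molchan's rate T^{−(1−H)}. The sketch below is illustrative only: it discretizes the integral by a plain Riemann sum, and all tuning values (grid, sample size, seed) are arbitrary.

```python
import numpy as np

def I_estimate(H, T, n_grid=400, n_paths=2000, seed=1):
    """Monte Carlo estimate of I(T) = E[(int_0^T e^{X(t)} dt)^{-1}] for FBM."""
    t = np.linspace(T / n_grid, T, n_grid)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (u ** (2 * H) + s ** (2 * H) - np.abs(u - s) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_grid))
    rng = np.random.default_rng(seed)
    X = L @ rng.standard_normal((n_grid, n_paths))
    integrals = np.exp(X).sum(axis=0) * (T / n_grid)   # Riemann sum, per path
    return float(np.mean(1.0 / integrals))

# Molchan: I(T) ~ c T^{-(1-H)}, so T^{1-H} * I(T) should be roughly stable in T:
for T in (10.0, 40.0):
    print(T, T ** (1 - 0.5) * I_estimate(0.5, T))
```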
A main goal of this paper is to shed light on the relation between I and F . Heuristically, it is clear that those paths of X that remain below 1 until T will escape to −∞ rather rapidly and thus give a major contribution to I. Vice versa, those paths that do not remain below 1 until T will tend to be near or above zero for a positive fraction of time and thus do not give much contribution to I.
Our proofs will make an effort to understand this relation beyond a heuristic level. The proof of the lower bound in Theorem 1 is based on seeing I(T) as an exponential integral. The proof of the upper bound in Theorem 1 selects some paths in the expectation in (3) that give a relevant contribution.
The constant c appearing in the theorem can be specified: for the lower bound one can choose any c > 1/(2H), for the upper bound any c > 2/H − 1. However, we do not conjecture optimality of either of the constants. In fact, our proofs make it plausible that even F(T) ≈ T^{−(1−H)}.
Due to the self-similarity of fractional Brownian motion, our main result immediately translates into a result for the lower tail of the maximum of fractional Brownian motion.

Corollary 2.
There is a constant c > 0 such that, for small enough ε, we have

ε^{(1−H)/H} (log(1/ε))^{−c} ≤ P[ sup_{0≤t≤1} X(t) ≤ ε ] ≤ ε^{(1−H)/H} (log(1/ε))^{c}.
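The translation from Theorem 1 is pure exponent bookkeeping: by self-similarity, P[sup_{0≤t≤1} X(t) ≤ ε] = P[sup_{0≤t≤T} X(t) ≤ 1] with T = ε^{−1/H}, and then T^{−(1−H)} = ε^{(1−H)/H}. A minimal numerical check of this identity (the sample values are arbitrary):

```python
# With T = eps**(-1/H), the polynomial rates in Theorem 1 and Corollary 2
# coincide: T**(-(1-H)) == eps**((1-H)/H).
for H in (0.25, 0.5, 0.75):
    eps = 1e-3
    T = eps ** (-1.0 / H)
    lhs, rhs = T ** (-(1.0 - H)), eps ** ((1.0 - H) / H)
    assert abs(lhs - rhs) <= 1e-9 * rhs
    print(H, lhs, rhs)
```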

Lower bound
Before proving the lower bound in Theorem 1, we explain the main line of thought. It shows that the quantity I is indeed a natural object in the study of one-sided exit probabilities (1), even beyond FBM.
The self-similarity of X implies that

T · I(T) = T · E[ ( ∫_0^T e^{X(t)} dt )^{−1} ] = E[ ( ∫_0^1 e^{T^H X(u)} du )^{−1} ].

The paths of the process X are Hölder continuous with any exponent γ < H, but not with the exponent H itself. However, suppose for a moment that |X(t) − X(s)| ∼ S|t−s|^H as |t−s| → 0, for some random variable S. Then the above term behaves asymptotically, up to factors involving S, as

E[ e^{−T^H X*_1} ],    where X*_1 := sup_{0≤t≤1} X(t).

Now, via Tauberian theorems, the behaviour of E[e^{−T^H X*_1}] as T → ∞ is related to the one-sided exit problem P[X*_1 ≤ ε] as ε → 0. However, by the self-similarity of X, we have

P[ X*_1 ≤ T^{−H} ] = P[ sup_{0≤t≤T} X(t) ≤ 1 ] = F(T),

which brings us back to our original problem.
Of course, fractional Brownian motion does not satisfy |X(t) − X(s)| ∼ S|t − s| H , so that the above calculations are just heuristics. However, the idea can be turned into a formally correct proof of the lower bound for F .
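One piece of the heuristics is rigorous and easy to check numerically: the substitution t = Tu behind the self-similarity step, ∫_0^T e^{x(t)} dt = T ∫_0^1 e^{x(Tu)} du, which holds path by path. A deterministic sketch (the test path below is an arbitrary stand-in for a realization of X):

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule (version-independent replacement for np.trapz)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

T = 7.0
x = lambda t: np.sin(t) - 0.1 * t        # arbitrary stand-in for a path
t = np.linspace(0.0, T, 100_001)
u = np.linspace(0.0, 1.0, 100_001)
lhs = trap(np.exp(x(t)), t)              # int_0^T e^{x(t)} dt
rhs = T * trap(np.exp(x(T * u)), u)      # T * int_0^1 e^{x(Tu)} du
assert abs(lhs - rhs) < 1e-9 * lhs
print(lhs, rhs)
```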
Proof of the lower bound in Theorem 1.
Let H/2 < γ < H. Fix a such that a > 2/H > 1/γ and γ < H − 1/a. Due to the stationarity of the increments, it is clear that fractional Brownian motion satisfies

E|X(t) − X(s)|^a = C(a)^a |t−s|^{aH}.

A close analysis of the constant C(a) shows that C(a) ≤ c a^{1/2}, as a → ∞. Therefore, we have

E|X(t) − X(s)|^a ≤ (c a^{1/2})^a |t−s|^{1+(aH−1)},    (4)

where c > 0 and aH − 1 > 0. By the well-known Kolmogorov theorem, this implies that X has Hölder continuous paths of order γ. An extension of Kolmogorov's theorem (see [9], Lemma 2.1) implies even more: namely, an estimate for the modulus of Hölder continuity. Concretely, from (4) we can infer that there is a random variable S such that, for any 0 < ε ≤ 1,

sup_{t,s∈[0,1], |t−s|≤ε} |X(t) − X(s)| ≤ S ε^γ,    with (E[S^a])^{1/a} ≤ k a^{1/2}.

Let us now mimic the heuristics presented before the proof: if X attains its maximum X*_1 over [0,1] at u*, there is an interval J ⊆ [0,1] of length ε containing u*, and X(u) ≥ X*_1 − S ε^γ for u ∈ J. Hence, for 0 < ε ≤ 1,

∫_0^1 e^{T^H X(u)} du ≥ ε e^{T^H X*_1} e^{−T^H S ε^γ}.

Thus,

T · I(T) = E[ ( ∫_0^1 e^{T^H X(u)} du )^{−1} ] ≤ ε^{−1} E[ e^{−T^H X*_1} e^{T^H S ε^γ} ].

For simplicity, we set g(T) := E[e^{−T^H X*_1}]. Now we use Hölder's inequality (1/p + 1/q = 1, p, q > 1) in the first term to get

T · I(T) ≤ ε^{−1} ( E[e^{−q T^H X*_1}] )^{1/q} ( E[e^{p T^H S ε^γ}] )^{1/p} = ε^{−1} g(q^{1/H} T)^{1/q} ( E[e^{p T^H S ε^γ}] )^{1/p}.

Setting a := p/γ, choosing ε := T^{−H/γ}, and using the estimate for the a-th moment of S (and q ≥ 1), we obtain:

T · I(T) ≤ T^{H/γ} e^{kp} g(q^{1/H} T)^{1/q}.

We rewrite this as follows:

g(q^{1/H} T) ≥ ( T^{1−H/γ} e^{−kp} I(T) )^q.

Here k is some constant depending only on H; and this inequality holds for all T > 0, γ < H, a > 2/H, and γ < H − 1/a.
We know from Molchan (Statement 1 in [5]) that I(T) ≥ c T^{−(1−H)} for T large enough. Then the left-hand side becomes:

g(q^{1/H} T) ≥ ( c T^{H−H/γ} e^{−kp} )^q.

Letting γ ↑ H and q ↓ 1 at appropriate rates depending on T, this yields g(T) ≥ T^{−(1−H)} (log T)^{−c} for large T. The remainder of the proof is clear due to Tauberian theorems (see [3], Corollary 8.1.7): this lower bound for the Laplace transform g implies the asymptotic lower bound for F(T).
Remark 3. We remark that we have used few properties of fractional Brownian motion. In fact, only (4) and the self-similarity are used, not even Gaussianity.

Upper bound
The main idea of the proof of the upper bound is to restrict the expectation in (3) to a set of paths on which the integral can be estimated.
In the proof we have to distinguish the cases of positively (H ≥ 1/2) and negatively (H < 1/2) correlated increments. The latter case is more involved but contains the same main idea.
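Positive correlation is what the Slepian-type arguments below feed on: for a centered Gaussian vector with nonnegative covariances, events of the form {X_i ≤ u_i} are positively associated, so the probability of an intersection is at least the product of the probabilities. A toy Monte Carlo illustration (the covariance matrix, barrier levels, and seed are arbitrary choices, not taken from the paper):

```python
import numpy as np

# For a centered Gaussian vector with nonnegative covariances, lower-barrier
# events are positively associated: P[A and B] >= P[A] * P[B].
rng = np.random.default_rng(42)
cov = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 1.0]])            # all covariances nonnegative
Z = np.linalg.cholesky(cov) @ rng.standard_normal((3, 200_000))
A = Z[0] <= 0.5                              # barrier on the first coordinate
B = (Z[1] <= 0.5) & (Z[2] <= 0.5)            # barrier on the remaining ones
p_joint, p_prod = float(np.mean(A & B)), float(np.mean(A) * np.mean(B))
print(p_joint, p_prod)                       # expect p_joint >= p_prod
assert p_joint >= p_prod
```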
Proof of the upper bound in Theorem 1 for the case H ≥ 1/2. Let κ > 1 and define: Clearly, for sufficiently large T, since (using κ > 1): By Slepian's inequality [11], noting that E[X(t)X(s)] ≥ 0, we have: The first factor in (8), by the lower bound in Theorem 1 (which was proved in Section 2), can be estimated as follows: The second factor in (8) equals: Since H ≥ 1/2, the increments of FBM are positively correlated. Therefore, by Slepian's lemma (second step): Putting these estimates together with Molchan's result (Statement 1 in [5]) for I(T) yields the assertion. In the case H < 1/2, the proof is more involved, even though the main idea is the same. We start with the following purely analytic fact.
Lemma 4. Let ℓ(t) := 2(log log(t e^e))^λ with λ > 0, and let 0 < α ≤ 1. Then there is an s_0 = s_0(H) ≥ 1 such that, for all t ≥ s ≥ s_0: The proof of this elementary lemma is given in the appendix. We continue with an auxiliary lemma. In view of (10), this lemma highlights the difficulties with one-sided exit problems for general processes.
Proof. Let ℓ(t) := 2(log log(t e^e))^λ with λ := 1/4 and define: The idea of the proof is that (Y(t))_{t≥1} and X(1) are positively correlated (unlike (X(t) − X(1))_{t≥1} and X(1)); but Y(t) is essentially the same as ℓ(t)X(t), at least for large t. Note that, for t ≥ 1: Furthermore, define the function f on [1, ∞) by: Then f is increasing, since f(t)^{2H} is (as can be seen immediately by differentiating). In fact, for some constant k > 0: Furthermore, the definition of f in (13) is such that: Further, one checks that, for some s_0 = s_0(H) ≥ 1: Indeed, let t ≥ s ≥ 1 and recall that this is equivalent to: Note that (16) can be rewritten as: This is Lemma 4 with α = 2H < 1. Now we are ready for the main argument. We use Slepian's lemma together with (12) and (15) in (17) and (18), respectively, to obtain the corresponding product lower bound. The second term is of order (log T)^{−o(1)}. The first term, by (14), can be estimated from below by P[ sup_{0≤t≤T} X(t) ≤ k (log log T)^{1/(4H)} ]. Now we can prove the upper bound also in the case H < 1/2.
Proof of the upper bound in Theorem 1, case H < 1/2. Let s_0 ≥ 1 be the constant from Lemma 5. As in (7)–(9), we obtain: The first factor on the right-hand side equals: Using Lemma 5, this shows: Putting these estimates together with Molchan's result (Statement 1 in [5]) for I(T) yields the assertion.
Remark 6. Let us comment on why it seems plausible that the upper bound could hold true without logarithmic loss factors. If we choose φ̃ instead of φ in the proof, we obtain, as in (7), a corresponding bound with some constants c, c′ > 0. In view of [12], it seems plausible that the latter probability has the same asymptotic rate (in the weak sense, but without additional logarithmic factors) as P[ sup_{0≤t≤T} X(t) ≤ 1 ]. However, at the moment there seems to be no way of proving this.
Remark 7. Note that starting from (19) or (7), one immediately obtains Molchan's result (2) by an application of Proposition 3.1 in [1]. This yields a new and simple proof of the upper bound in (2) for all 0 < H < 1. We remark that this argument works for any Gaussian process such that the function g(t) = κ log_+ t is bounded from above by some function from the RKHS of the process, cf. Proposition 3.1 in [1].
Appendix

where we used (21) in the first step. This shows (20).
This can be seen as follows. Note that the left-hand side of this inequality behaves asymptotically as c (log t)(log log t). So, for those t with (log t)^2 ≤ (s^α − 2)/α, the inequality holds.