On the Small Deviation Problem for Some Iterated Processes

We derive general results on the small deviation behavior for some classes of iterated processes. This allows us, in particular, to calculate the rate of the small deviations for $n$-iterated Brownian motions and, more generally, for the iteration of $n$ fractional Brownian motions. We also give a new and correct proof of some results in E. Nane, Laws of the iterated logarithm for $\alpha$-time Brownian motion, Electron. J. Probab. 11 (2006), no. 18, 434--459.


Introduction
This article is concerned with the small deviation problem for iterated processes. We consider two independent, real-valued stochastic processes X and Y (precise assumptions are given below), define the iterated process by (X ∘ Y)(t) := X(Y(t)), t ∈ [0, 1], and investigate the small deviation function

− log P( sup_{t∈[0,1]} |X(Y(t))| ≤ ε )    (1)

as ε → 0. The goals of this article are
• to provide general results concerning the order of (1), given that we know the small deviation probabilities of the processes X and Y, respectively, and that Y has a continuous modification;
• to study some instructive examples of processes to which this technique can be applied, among them the iteration of n (fractional) Brownian motions; and
• to give a new and correct proof of some results from [21].

General results

Our general results require the outer process X to be two-sided; nevertheless, we will show below how the technique can be adjusted by using the stationarity of increments instead of the independence. In this section, we assume that
• X is an H-self-similar, two-sided process;
• Y has a continuous modification.
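To fix ideas, the objects just introduced are easy to sample numerically. The following Python sketch is an illustration only (grid size, seeds, and linear interpolation are ad hoc choices): it simulates a discretized path of X ∘ Y on [0, 1], with Y a Brownian motion and X a two-sided Brownian motion realized as two independent branches for positive and negative arguments.

```python
import math
import random

def brownian_path(n_steps, t_max, seed):
    """Brownian motion on a grid of [0, t_max] via cumulative Gaussian increments."""
    rng = random.Random(seed)
    dt = t_max / n_steps
    path = [0.0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return path

def iterated_bm(n_steps=10_000, seed=1):
    """Sample (X ∘ Y)(t) = X(Y(t)), t ∈ [0, 1]: Y is a Brownian motion and X a
    two-sided Brownian motion (independent branches for positive/negative arguments)."""
    y = brownian_path(n_steps, 1.0, seed)
    t_max = max(abs(min(y)), abs(max(y)))            # X must be sampled on [inf Y, sup Y]
    x_pos = brownian_path(n_steps, t_max, seed + 1)  # branch of X for arguments s >= 0
    x_neg = brownian_path(n_steps, t_max, seed + 2)  # independent branch for s < 0

    def X(s):
        branch = x_pos if s >= 0 else x_neg
        u = abs(s) / t_max * n_steps
        i = min(int(u), n_steps - 1)
        return branch[i] + (u - i) * (branch[i + 1] - branch[i])  # linear interpolation

    return [X(s) for s in y]

z = iterated_bm()
```

A sample such as z can then be used, for instance, to estimate P( sup_t |X(Y(t))| ≤ ε ) by Monte Carlo for moderate ε.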
If we know the weak asymptotic order of the small deviation probabilities of the processes X and Y, respectively, we can determine that of the process X ∘ Y.
The implication also holds if ≈ is replaced by ≲ or ≳, respectively. For translating lower bounds for the probabilities (i.e. ≲ in the relations above), the assumption that Y is continuous can be dropped.
Remark 2. Note that the resulting exponent is always less than θ. Therefore, the small deviation probability of X ∘ Y is always larger than that of X. This is of course not true in general when comparing X ∘ Y to Y.
Remark 3. In fact, for the proof it is sufficient to know that − log P( sup_{t∈[−T,T]} |X(t)| ≤ ε ) ≈ T^{Hθ} ε^{−θ} for all T > 0, instead of the self-similarity property and the given small deviations of X.
Furthermore, provided we know the strong order of the small deviation functions of X and Y, we can prove the strong asymptotic order of that of the iterated process.
imply

− log P( sup_{t∈[0,1]} |X(Y(t))| ≤ ε ) ∼ …, as ε → 0.

The implication also holds if ∼ is replaced by ≲ or ≳, respectively. For translating lower bounds for the probabilities (i.e. ≲ in the relations above), the assumption that Y is continuous can be dropped.
It is easy to check that this theorem recovers the results from [9], where X and Y are Brownian motions, and [10], where X is a Brownian motion and Y = |Y ′ | with Y ′ being a Brownian motion.
Remark 5. Similarly, for the proof it is sufficient to know the strong asymptotics of − log P( sup_{t∈[−T,T]} |X(t)| ≤ ε ) for all T > 0, instead of the self-similarity property and the given small deviations of X.

Remark 6. A careful reader might wonder why the self-similarity index of X and the small deviation index of X should be related by θ := 1/H. In fact, this relation is rather typical for the supremum norm. We refer to [14] for an explanation of this fact in the context of small deviations with respect to general norms; see also [23].
Typically, it is not the probability in (3) that is given, but rather the small deviation probability. The following lemma translates the small deviation probability into (3) (and back), provided the process satisfies the Anderson property.
Recall that the Anderson property for a random vector Y taking values in a linear space E means that

P( Y ∈ A + e ) ≤ P( Y ∈ A )    (4)

for any e ∈ E and any measurable symmetric convex set A ⊆ E, cf. [2]. It is known that any centered Gaussian vector has this property. Another example is given by symmetric α-stable vectors, since their distributions can be represented as mixtures of Gaussian ones.
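In the simplest Gaussian case the Anderson property can be verified in closed form: for a standard Gaussian g and the symmetric convex set A = [−a, a], the shifted probability P(g ∈ A + e) = Φ(e + a) − Φ(e − a) is maximal at e = 0. The sketch below (an illustration only, using the exact normal distribution function) makes this explicit.

```python
import math

def norm_cdf(x):
    """Standard normal distribution function, via math.erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_shifted_interval(a, e):
    """P(g ∈ [-a, a] + e) = Φ(e + a) - Φ(e - a) for a standard Gaussian g."""
    return norm_cdf(e + a) - norm_cdf(e - a)

# Anderson's inequality for A = [-a, a]: the probability is maximal at the
# shift e = 0 and decreases as |e| grows.
probs = [prob_shifted_interval(1.0, e) for e in (0.0, 0.5, 1.0, 2.0)]
```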
Note that the applicability of Lemma 7 rests on the Anderson property. We now show that if X satisfies the Anderson property, then so does X ∘ Y; this makes it possible to apply Theorem 4 iteratively.
Lemma 8. Let T be a non-empty index set and let (X(u))_{u∈R} and (Y(t))_{t∈T} be independent stochastic processes, where X satisfies the Anderson property. Then the process (X(Y(t)))_{t∈T} satisfies the Anderson property.
This shows that, in particular, iterated Brownian motion, the iteration of two (or, more generally, n) fractional Brownian motions, α-time Brownian motion (defined below), and many other non-Gaussian processes satisfy the Anderson property.

Proofs of the general results
Before we prove Theorem 1, we recall a result that translates the small deviation probability into a corresponding result for the Laplace transform.
Lemma 9. Let Y(0) = 0 almost surely, p > 0, and τ > 0. Then

− log P( sup_{t∈[0,1]} |Y(t)| ≤ ε ) ≈ ε^{−τ}, as ε → 0,

if and only if

− log E exp( −λ sup_{t∈[0,1]} |Y(t)|^p ) ≈ λ^{τ/(p+τ)}, as λ → ∞.

The relation also holds if ≈ is replaced by ≲ (with ≲ in the assertion) or ≳ (with ≳ in the assertion), respectively.
Proof: This follows simply from the fact that P( sup_{t∈[0,1]} |Y(t)|^p ≤ ε ) = P( sup_{t∈[0,1]} |Y(t)| ≤ ε^{1/p} ) and the de Bruijn Tauberian Theorem (Theorem 4.12.9 in [4]).
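The exponent transfer behind Lemma 9 can be checked numerically on a toy example. In the sketch below (an illustration only; the distribution of V, the integration window, and the grid size are ad hoc choices), V satisfies −log P(V ≤ v) = v^{−τ} with τ = 2 exactly, and the de Bruijn Tauberian theorem predicts −log E exp(−λV) of order λ^{τ/(1+τ)} = λ^{2/3}.

```python
import math

# Toy random variable V >= 0 with P(V <= v) = exp(-v**(-TAU)), so that
# -log P(V <= v) = v**(-TAU) exactly; TAU plays the role of tau in Lemma 9.
TAU = 2.0

def neg_log_laplace(lam, n=50_000):
    """-log E[exp(-lam*V)], where V has density f(v) = TAU*v**(-TAU-1)*exp(-v**(-TAU)),
    integrated by the midpoint rule in log-space (log-sum-exp) to avoid underflow."""
    v_star = (TAU / lam) ** (1.0 / (1.0 + TAU))  # saddle point of lam*v + v**(-TAU)
    lo, hi = v_star / 50.0, v_star * 50.0        # window capturing the Laplace peak
    dv = (hi - lo) / n
    logs = []
    for i in range(n):
        v = lo + (i + 0.5) * dv
        logs.append(-lam * v + math.log(TAU) - (TAU + 1.0) * math.log(v) - v ** (-TAU))
    m = max(logs)
    return -(m + math.log(sum(math.exp(t - m) for t in logs) * dv))

# Multiplying lam by 8 should multiply -log E[exp(-lam*V)] by roughly
# 8**(TAU/(1+TAU)) = 8**(2/3) = 4, up to lower-order corrections.
ratio = neg_log_laplace(800.0) / neg_log_laplace(100.0)
```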
Now we can prove Theorem 1.

Proof of Theorem 1: By assumption, the bounds (5) hold for some constants C_1, C′_1, C_2, C′_2 > 0 and all ε > 0. Note that, since Y is continuous, Y([0, 1]) = [N, M], where N := inf_{t∈[0,1]} Y(t) and M := sup_{t∈[0,1]} Y(t). Therefore, by the independence of X and Y and by the independence of X for positive and negative arguments, we obtain the factorization (7) of P( sup_{t∈[0,1]} |X(Y(t))| ≤ ε ). Now we use the H-self-similarity of X to rescale the first factor in (7) to the unit interval; by (5), it is then bounded by an exponential in −ε^{−θ} M^{Hθ}. Analogously one can argue for the second term in (7), which yields that the whole expression in (7) is less than a Laplace transform of the form E exp( −C ε^{−θ} ( sup_{t∈[0,1]} |Y(t)| )^{Hθ} ). By Lemma 9, the logarithmic order of this Laplace transform, when ε → 0, is ε^{−θ/(1+Hθ/τ)}, which proves the upper bound in the assertion. The lower bound is established in exactly the same way using the lower bound in (5). Note that this argument fails when Y is not continuous, since then we only have Y([0, 1]) ⊆ [N, M].

Now let us prove the strong asymptotics result.

Proof of Theorem 4: Let δ > 0. By assumption, the strong asymptotics in the hypotheses hold up to a factor 1 ± δ for all 0 < ε < ε_0 = ε_0(δ). This implies that there are constants C_1, C_2 > 0 (depending on ε_0) such that the bounds (8) hold for all ε > 0. By repeating the previous proof with (5) replaced by (8), we arrive at the corresponding bound for P( sup_{t∈[0,1]} |X(Y(t))| ≤ ε ). By using the assumption Hθ = 1, the exponent in the resulting Laplace transform simplifies. Next, by the de Bruijn Tauberian Theorem (Theorem 4.12.9 in [4]), the strong asymptotic logarithmic order of this Laplace transform, when ε → 0, is as asserted. Letting δ → 0 proves the upper bound in the assertion. The lower bound follows in exactly the same way using the lower bound in (8). As in the previous theorem, the proof fails when Y is not continuous, because we only have Y([0, 1]) ⊆ [N, M].

Proof of Lemma 7: One direction of the equivalence is immediate. On the other hand, let N := inf_t Y(t) and M := sup_t Y(t). Fix h > 1 and 0 < ε < 1. Assume that the supremum in question is at most ε. Let m be the point in {kε^h : k ∈ Z} closest to Q. There are at most 2(ε + ε^h)/ε^h possible values for m. Additionally, we have a chain of estimates in which the Anderson property is used in the fourth step.
Taking logarithms, multiplying by −(2ε)^τ ℓ(2ε)^{−1}, taking limits, and using that ℓ is a slowly varying function implies the assertion, which finishes the proof.
Proof of Lemma 8: It is sufficient to check (4) for cylindric sets. Fix d ≥ 1 and let B be a symmetric convex set in R^d. Let t_1, . . . , t_d ∈ T and fix any function e : T → R. Define the cylinder A := {a : T → R : (a(t_1), . . . , a(t_d)) ∈ B} and the corresponding random cylinders. Then we have …

Iterated Brownian motions
As a first example, let us consider the n-iterated Brownian motions

X^{(n)}(t) := X_n(X_{n−1}(· · · X_1(t) · · ·)), t ∈ [0, 1],    (10)

where the X_i are independent (two-sided) Brownian motions. This process is 2^{−n}-self-similar. The small deviation problem can be solved by applying Theorem 4 (n − 1) times (together with Lemmas 7 and 8):

− log P( sup_{t∈[0,1]} |X^{(n)}(t)| ≤ ε ) ≈ ε^{−2/d_n},

where d_1 := 1 and d_n := 1 + d_{n−1}/2. An explicit calculation yields d_n = 2 − 2^{1−n}, so that the rate is ε^{−2^n/(2^n−1)}.
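The iteration behind this computation can be made explicit. The sketch below applies the exponent map θ ↦ θ/(1 + Hθ/τ) from the proof of Theorem 1, taking a Brownian motion (θ = 2, H = 1/2) as outer process and the previously built iterate as inner process; the name theta_n and the closed form 2^n/(2^n − 1) are our own bookkeeping, checked by the code.

```python
from fractions import Fraction

def iterated_bm_exponent(n):
    """Small deviation exponent theta_n of the n-iterated Brownian motion,
    obtained by iterating the exponent map theta -> theta/(1 + H*theta/tau)
    with an outer Brownian motion (theta = 2, H = 1/2) and the previously
    built iterate as inner process (tau = theta_{n-1})."""
    theta = Fraction(2)  # single Brownian motion: -log P(sup <= eps) behaves as eps^(-2)
    for _ in range(n - 1):
        tau = theta
        theta = Fraction(2) / (1 + Fraction(1) / tau)  # H*theta/tau = (1/2)*2/tau = 1/tau
    return theta

exponents = [iterated_bm_exponent(n) for n in range(1, 6)]
```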

Iterated two-sided fractional Brownian motions
More generally, one can consider n-iterated fractional Brownian motions, given by (10), where this time X_1, . . . , X_n are independent (two-sided) fractional Brownian motions with Hurst parameters H_1, . . . , H_n, respectively. The process X^{(n)} is H_1 · . . . · H_n-self-similar. Its small deviation order is obtained iteratively from Theorem 4, where c_n is defined iteratively from the small deviation constants c(H_1), . . . , c(H_n), and c(H) is the small deviation constant of fractional Brownian motion with Hurst parameter H. Even for n = 2, i.e. fractional Brownian motions X_1 and X_2 with Hurst parameters H_1 and H_2, respectively, this leads to the new result that the small deviation order is ε^{−1/(H_2(1+H_1))}.
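The same iteration applies to fractional Brownian motions. Since Hθ = 1 for an fBm with Hurst index H (as θ = 1/H), each step of the exponent map θ ↦ θ/(1 + Hθ/τ) from the proof of Theorem 1 reduces to θ ↦ (1/H)/(1 + 1/τ). The sketch below (function name and argument convention are ours) verifies in particular the value 1/(H_2(1 + H_1)) for n = 2.

```python
from fractions import Fraction

def iterated_fbm_exponent(hursts):
    """Small deviation exponent for iterated fBms with Hurst parameters
    hursts = (H_1, ..., H_n), listed from the innermost process outwards."""
    theta = 1 / Fraction(hursts[0])  # single fBm: -log P(sup <= eps) behaves as eps^(-1/H)
    for h in map(Fraction, hursts[1:]):
        tau = theta                  # exponent of the already-built inner iterate
        # exponent map theta -> theta/(1 + H*theta/tau); for an fBm, H*theta = 1
        theta = (1 / h) / (1 + 1 / tau)
    return theta

# n = 2: inner Hurst H_1 = 1/2, outer Hurst H_2 = 1/4 gives 1/(H_2*(1 + H_1)) = 8/3
e2 = iterated_fbm_exponent([Fraction(1, 2), Fraction(1, 4)])
```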

The 'true' iterated fractional Brownian motion
Note that in the last subsection we obtained the small deviation order for 'iterated fractional Brownian motion' X ∘ Y, where X was a two-sided fractional Brownian motion, i.e. a process consisting of two independent branches for positive and negative arguments, and Y was another fractional Brownian motion (independent of the two branches of X). We shall now calculate the small deviation order for the 'true' iterated fractional Brownian motion, namely, using Y as above but X being a centered Gaussian process on R with covariance

E[X(t)X(s)] = ( |t|^{2H} + |s|^{2H} − |t − s|^{2H} ) / 2, t, s ∈ R.    (13)

The general result is as follows.
Theorem 11. Let X be a fractional Brownian motion with Hurst index H as given in (13), and let Y be a continuous process, independent of X, satisfying (14). Then

− log P( sup_{t∈[0,1]} |X(Y(t))| ≤ ε ) ∼ …, as ε → 0,

where c(H) is the small deviation constant of fractional Brownian motion.
This theorem can be applied to many processes Y. We recall that (14) can be obtained, e.g., via Lemma 7 from the small deviation order. In particular, if Y is also a fractional Brownian motion, we get the following result for the 'true' iterated fractional Brownian motion.
Corollary 12. Let X be a fractional Brownian motion with Hurst index H = H_2 as given in (13) and let Y be a (continuous modification of a) fractional Brownian motion with Hurst index H_1 (independent of X). Then

− log P( sup_{t∈[0,1]} |X(Y(t))| ≤ ε ) ∼ …, as ε → 0.

Note that we obtain the same logarithmic small deviation order as for a two-sided fBm. Moreover, the iteration of n 'true' fractional Brownian motions yields the same asymptotics as obtained in (12) for the two-sided fBm.
Note that, in spite of the identity of the assertions, Theorem 11 does not follow from Theorem 4, since X is not two-sided. We will now show how the stationarity of increments of the 'true' fBm replaces the independence property of the two-sided process. For the proof of Theorem 11, we need the following lemma.
Lemma 13. For any δ ∈ (0, 1) there exists K_δ > 0 such that, for all N ≤ 0 ≤ M, all ε > 0, and any centered Gaussian process X(t), t ∈ R, with stationary increments, it is true that …

Proof: To see the upper bound, observe that the stationarity of increments and the weak correlation inequality (cf. [12]) yield … For the lower bound, using the same arguments in inverse order, we get …

Proof of Theorem 11: Let δ > 0 and define, as before, N := inf_{t∈[0,1]} Y(t) and M := sup_{t∈[0,1]} Y(t). Then Lemma 13 yields that, for some constant K_δ > 0, … Note that f, g ≥ 0 are non-increasing functions. Thus, by the FKG inequality (cf. e.g. [17], p. 65), the last term is bounded from below by … The first term can be handled as in the proof of Theorem 4; the resulting order is … On the other hand, one easily proves that the remaining term admits a lower bound of order ε, as ε → 0. Therefore, … Letting δ → 0 finishes the proof. The upper bound can be proved along the same lines, or by using the Hölder inequality instead of the FKG inequality.

Motivation
Let X be a Brownian motion and Y be a symmetric α-stable Lévy process.
In [21], the small deviation problem for X ∘ Y, called α-time Brownian motion there, is studied and further applied to Chung-type LILs and to results on the local time of these processes. However, the method of proof in [21] is essentially the same as for our Theorem 4. Note that this proof is incorrect in the case of α-time Brownian motion, since the inner process Y is not continuous, while continuity is a main ingredient of the proof. In fact, it is used that Y([0, 1]) = [N, M], with, as above, N := inf_t Y(t) and M := sup_t Y(t), which is not true for this Y.
However, trivially Y([0, 1]) ⊆ [N, M], and thus the proof in [21] does give a lower bound for the small deviation probability. The purpose of this section is to give a correct proof of the upper bound.
More precisely, we show the following version of Theorem 2.3 in [21]. This result implies weaker versions of the results in [21].
Theorem 14. Let X be a two-sided Brownian motion and Y be a strictly α-stable Lévy process (independent of X) that is not a subordinator. Then

− log P( sup_{t∈[0,1]} |X(Y(t))| ≤ ε ) ≈ ε^{−2α/(1+α)}.

We note that this is not as strong as the assertion of Theorem 2.3 in [21]: the existence of the small deviation constant and its value are not ensured. This should be the subject of further investigation.
Note furthermore that we prove the result for general strictly stable Lévy processes Y; symmetry is not required here. The only property that is used is self-similarity.
For the sake of completeness, let us mention that the above result fails in the case that Y is an α-stable subordinator (0 < α < 1). Namely, in that case X ∘ Y is in fact a symmetric (2α)-stable Lévy process itself, so that we then get − log P( sup_{t∈[0,1]} |X(Y(t))| ≤ ε ) ≈ ε^{−2α}. We shall even prove the following more general version of Theorem 14.
Theorem 15. Let X be a two-sided strictly β-stable Lévy process (0 < β ≤ 2) and Y be a strictly α-stable Lévy process (0 < α ≤ 2, independent of X) that is not a subordinator. Then

− log P( sup_{t∈[0,1]} |X(Y(t))| ≤ ε ) ≈ ε^{−αβ/(1+α)}.

Remark 16. The result also holds if we take an H-fractional Brownian motion or an H-Riemann-Liouville process as X, cf. [19]; then, of course, β = 1/H.

The proof of Theorem 15 is given in several steps. First note that the lower bound follows from our Theorem 1 and Prop. 3, Section VIII, in [3] (the result actually dates back to [25], [20], and [5]). The upper bound follows from Proposition 18 below, as explained there.

Handling the outer Brownian motion
In order to prove Theorem 15, we shall proceed as follows. In a first step, we show that the small deviation problem for processes subordinated to Brownian motion (or, more generally, to a strictly β-stable Lévy process) is closely connected to the (random) entropy numbers of the range of the inner process (i.e. K = Y([0, 1])). This technique was previously used in [18] and [19] for fractional Brownian motion. Then we estimate the entropy numbers of the range of the inner process, in our case a strictly α-stable Lévy process (the subordinator case was studied in [18]). This requires completely new arguments.
For a compact set K ⊂ R and ε > 0, let N(K, ε) denote the minimal number of intervals of length 2ε needed to cover K. These quantities are usually called covering numbers of the set K and characterize its metric entropy. We can get rid of the randomness of the outer process X in the following way.
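In dimension one, the covering number of a finite (e.g. discretized) set can be computed exactly: the greedy left-to-right placement of intervals is optimal. The following sketch is an illustration only; the random-walk sample merely stands in for a discretized path of the inner process.

```python
import random

def covering_number(points, eps):
    """Minimal number of intervals of length 2*eps needed to cover the finite
    set `points`; the greedy left-to-right scan is optimal in dimension one."""
    pts = sorted(points)
    count, i = 0, 0
    while i < len(pts):
        count += 1
        right_end = pts[i] + 2 * eps  # place the interval [pts[i], pts[i] + 2*eps]
        while i < len(pts) and pts[i] <= right_end:
            i += 1
    return count

# a discretized random-walk path standing in for the range of the inner process
rng = random.Random(0)
walk, s = [0.0], 0.0
for _ in range(10_000):
    s += rng.gauss(0.0, 0.01)
    walk.append(s)
n_cover = covering_number(walk, 0.05)
```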
Proposition 17. Let X be a (two-sided) strictly β-stable Lévy process, 0 < β ≤ 2. Then there is a constant c_0 > 0 such that, for all compact sets K ⊂ R and all ε > 0,

P( sup_{t∈K} |X(t)| ≤ ε ) ≤ exp( 1 − N(K, c_0 ε^β) ).

Proof: This simple result can be proved in essentially the same way as Proposition 3.1 in [19], where it was shown for X being fractional Brownian motion. We therefore just indicate the proof: choose c_0 so large that sup_{t≥c_0} P(|X(t)| ≤ 2) ≤ e^{−1}. For N = N(K, c_0 ε^β), find an increasing sequence t_1, . . . , t_N in K such that t_{i+1} − t_i ≥ c_0 ε^β for all i = 1, . . . , N − 1. Then, by the independence of increments and the strict stability of X, we have

P( sup_{t∈K} |X(t)| ≤ ε ) ≤ ∏_{i=1}^{N−1} P( |X(t_{i+1}) − X(t_i)| ≤ 2ε ) ≤ e^{−(N−1)}.

Recall that in order to prove Theorem 15 we want to obtain an upper bound for P( sup_{t∈[0,1]} |X(Y(t))| ≤ ε ), where we let K = Y([0, 1]). Since, for any R > 0, we have

P( sup_{t∈[0,1]} |X(Y(t))| ≤ ε ) ≤ E exp( 1 − N(K, c_0 ε^β) ) ≤ e^{1−R} + P( N(K, c_0 ε^β) < R ),

the upper bound in Theorem 15 follows immediately from the next result.
Proposition 18. Let Y be a strictly α-stable Lévy process and set K = Y([0, 1]). Then there exist small c and δ, depending on the law of Y, such that, for all ε > 0 and k = ε^{−α/(1+α)},

P( N(K, ε) < δk ) ≤ exp(−ck).    (15)
The proof of Proposition 18 is given in the next subsection. In Section 5.4, an alternative proof is given, which is much shorter but involves local times and thus only works for α > 1.

Remark 19. Actually, the investigation of the small deviation probabilities for covering numbers such as P( N(Y([0, 1]), ε) < k ) is an interesting problem in its own right, and we hope to treat it extensively elsewhere. Here, we just note that the order of the estimate (15) is sharp and that it is a particular case of a more general fact that can be obtained similarly: …, which is valid for 1 ≤ k ≤ ε^{−1}, 1 < α < 2, and for 1 ≤ k ≤ δε^{−α/(1+α)}, 0 < α < 1. More effort is needed to understand the remaining cases, e.g. ε^{−α/(1+α)} ≪ k ≪ ε^{−α}, 0 < α < 1.

Proof of Proposition 18
We will now prove inequality (15). For this purpose, let us introduce the notation N_{[0,t]}(ε) for the covering numbers of Y([0, t]): for a given t ≥ 0, N_{[0,t]}(ε) counts how many intervals are needed in order to cover the range of the process when looking at the path only up to time t. Let T = ε^{−α}. By scaling, we have … Let δ < 1/2. Notice that the number of choices for J satisfying |J| ≥ ⌊(1 − δ)k⌋ can be expressed as 2^k P(B_k ≥ ⌊(1 − δ)k⌋), where B_k is a sum of k Bernoulli random variables attaining the values 0 and 1 with equal probabilities. By the classical Chernoff bound for the large deviations of B_k, we see that this number is smaller than e^{δ_1 k}, where δ_1 satisfies δ_1 → 0 as δ → 0. For a while, we fix an index set J. We enlarge the events from (16) as follows: … Let, as usual, F_t denote the filtration generated by the process Y up to time t. By the stationarity and independence of increments, we have … By a standard conditioning argument, we find …
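The combinatorial step — counting the index sets J with |J| ≥ ⌊(1 − δ)k⌋ and bounding the count by a Chernoff estimate — can be checked directly for small k. In the sketch below (helper names are ours; the entropy form exp(k·H(m/k)) is one standard way to write the Chernoff bound, yielding a factor e^{δ₁k} with δ₁ → 0 as δ → 0), the exact count 2^k P(B_k ≥ ⌊(1 − δ)k⌋) is compared with the bound.

```python
import math

def count_large_subsets(k, delta):
    """Exact number of index sets J ⊆ {1,...,k} with |J| >= floor((1-delta)*k);
    this equals 2^k * P(B_k >= floor((1-delta)*k)) for B_k ~ Binomial(k, 1/2)."""
    m = math.floor((1 - delta) * k)
    return sum(math.comb(k, j) for j in range(m, k + 1))

def entropy_bound(k, delta):
    """Chernoff-type bound: 2^k * P(B_k >= m) <= exp(k * H(m/k)), where
    H(p) = -p*log(p) - (1-p)*log(1-p) is the binary entropy in nats and
    H(m/k) -> 0 as delta -> 0; assumes m/k > 1/2 (i.e. delta < 1/2, k not tiny)."""
    m = math.floor((1 - delta) * k)
    p = m / k
    return math.exp(k * (-p * math.log(p) - (1 - p) * math.log(1 - p)))
```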

Alternative proof via local times
Here, we give an alternative proof of Proposition 18 when α > 1. In this case, the strictly α-stable process Y possesses a continuous local time L, characterized by the occupation formula

∫_B L(x) dx = ∫_0^1 1l_B(Y(t)) dt, for all Borel sets B.
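The occupation formula is easy to test on a discretized path, approximating the local time by a normalized histogram. The following sketch is an illustration only (the random walk, bin width, and seed are ad hoc choices); it verifies the identity for the Borel set B = [0, ∞).

```python
import math
import random

def empirical_local_time(path, dt, bin_width):
    """Histogram estimate of the local time: time spent in each spatial bin of
    width `bin_width`, divided by the bin width, so that summing
    L[bin] * bin_width over the bins inside a set B approximates the
    occupation time ∫_0^1 1l_B(Y(t)) dt."""
    L = {}
    for y in path:
        b = math.floor(y / bin_width)
        L[b] = L.get(b, 0.0) + dt / bin_width
    return L

n = 100_000
dt = 1.0 / n
rng = random.Random(42)
path, y = [], 0.0
for _ in range(n):
    path.append(y)
    y += rng.gauss(0.0, math.sqrt(dt))

L = empirical_local_time(path, dt, bin_width=0.05)
# occupation identity for B = [0, ∞): both sides give the time spent at y >= 0
occ_from_L = sum(l * 0.05 for b, l in L.items() if b >= 0)
occ_direct = sum(dt for yv in path if yv >= 0)
```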
Some further remarks on extensions

1) Slowly varying terms in the asymptotics of Y. One may add a slowly varying term to the asymptotics in (2) or (3) and still use the same method of proof to get a similar result. Also, one can consider what happens if Y has polynomial or super-exponential small deviation behaviour and obtain a similar result using the same method of proof. However, it is not immediately clear what happens if X has a small deviation order below or above the exponential scale.
2) Chung's law of the iterated logarithm. It is straightforward to derive from Theorems 1 and 4 the lower bounds in the respective Chung-type laws if, additionally, Y satisfies a certain self-similarity. Note that the derivation of the upper bounds usually requires independence arguments that are specific to the processes considered. We preferred not to go into these details in order to maintain a certain level of generality.
3) Small deviations in L_p-norm. Note that in this article we investigate the small deviation problem for X ∘ Y with respect to the supremum norm. It would be interesting to study the small deviations in other norms, e.g. of ∫_0^1 |X(Y(t))|^p dt for given p > 0, and to investigate their dependence on the small deviations of X, Y and, if applicable, the local time of Y. Nothing seems to be known about this problem, even for iterated Brownian motion.