On the supremum of products of symmetric stable processes

We study the asymptotics, for small and large values, of the supremum of a product of independent symmetric stable processes. We show in particular that the persistence exponent remains the same as for a single process, up to some logarithmic terms.


Introduction
For n ∈ ℕ, let (Z^{(i)}, 1 ≤ i ≤ n) be independent symmetric α-stable Lévy processes with α ∈ (0, 2]. In this short note, we are interested in the study of the random variable
\[
S_n = \sup_{0\le u\le 1}\ \prod_{i=1}^{n} Z^{(i)}_u.
\]
Except when n = 1, in which case the double Laplace transform of S_1 is classically given by fluctuation theory (see for instance Bertoin [4, p.174]), it does not seem easy to compute explicitly the law of S_n, and we shall rather study its asymptotics P(S_n ≥ x) as x → +∞ and P(S_n ≤ ε) as ε → 0.
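To get a feel for these two regimes, one can approximate S_n by Monte Carlo. The sketch below is our own illustration, not part of the proofs: it simulates the n processes on a uniform grid of [0, 1] via the Chambers-Mallows-Stuck representation of symmetric stable variables (so the grid supremum underestimates the true S_n), and the function names and discretization parameters are ours.

```python
import numpy as np

def stable_increments(alpha, size, rng):
    """Symmetric alpha-stable variates (Chambers-Mallows-Stuck method)."""
    if alpha == 2.0:
        # Brownian case: the standard stable parametrization has variance 2
        return np.sqrt(2.0) * rng.standard_normal(size)
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

def simulate_S_n(n, alpha, steps=200, samples=5000, seed=0):
    """Monte Carlo samples of S_n = sup_{0<=u<=1} prod_i Z_u^{(i)} on a grid."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps
    # increments over each grid cell, scaled by dt^{1/alpha} (self-similarity)
    inc = dt ** (1 / alpha) * stable_increments(alpha, (samples, n, steps), rng)
    paths = np.cumsum(inc, axis=-1)   # Z^{(i)} evaluated at the grid points
    prod = np.prod(paths, axis=1)     # product of the n processes at each time
    return prod.max(axis=-1)          # supremum of the product over the grid

S = simulate_S_n(n=2, alpha=1.5)
print("P(S_2 <= 0.1) ~", np.mean(S <= 0.1), "  P(S_2 >= 10) ~", np.mean(S >= 10.0))
```

Plotting log P(S_n ≤ ε) against log ε for several n should then exhibit the common persistence exponent together with the logarithmic corrections discussed below.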
Most of the paper is devoted to the computation of the limit as ε → 0, which is known as a persistence problem, see the surveys [1,5]. By scaling, this amounts to the study of the first entrance time of the n-dimensional stable process (Z^{(i)}, 1 ≤ i ≤ n) into the "hyperbolic" domain
\[
H_n = \Big\{(z_1,\dots,z_n)\in\mathbb{R}^n,\ \prod_{i=1}^{n} z_i \ge 1\Big\}:
\]
\[
R_n = \inf\big\{t\ge 0,\ (Z^{(1)}_t,\dots,Z^{(n)}_t)\in H_n\big\}.
\]
There are several papers in the literature dealing with entrance and exit times of symmetric stable processes, mainly for three families of domains: cones and wedges (Bañuelos and Bogdan [2], Méndez-Hernández [9]), parabolic domains (Bañuelos and Bogdan [3]) and unbounded convex domains (Méndez-Hernández [8]). Here, since the domain H_n is not connected, not much is known regarding R_n, and we shall tackle the problem directly by working with S_n.
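For completeness, the scaling step can be made explicit. Writing R_n for the first entrance time of (Z^{(1)},…,Z^{(n)}) into H_n, the self-similarity of each Z^{(i)}, namely (Z^{(i)}_{cv}, v ≥ 0) = (law) (c^{1/α} Z^{(i)}_v, v ≥ 0), shows that the product scales with index n/α, whence (a one-line derivation):

```latex
\mathbb{P}(S_n \le \varepsilon)
 = \mathbb{P}\Big(\sup_{0\le u\le 1}\,\prod_{i=1}^{n} Z^{(i)}_u \le \varepsilon\Big)
 = \mathbb{P}\Big(\sup_{0\le v\le \varepsilon^{-\alpha/n}}\,\prod_{i=1}^{n} Z^{(i)}_v \le 1\Big)
 = \mathbb{P}\big(R_n > \varepsilon^{-\alpha/n}\big).
```

The small-deviation problem for S_n is thus equivalent to the tail of the entrance time R_n.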
(1) Large deviations:
(2) Persistence probability:
In the non-Gaussian stable case, the situation is different.
(1) Large deviations:
(2) Persistence probability:
The presence of extra logarithmic terms in the persistence probability of Brownian motion is due to some additive phenomena. Indeed, recall the estimates (see Bertoin [4, p.219]), valid for some positive constants k and c. When α < 2, the second asymptotics is the leading one, while for α = 2 they are of the same order, and some compensations appear, see Lemma 4. In fact, the heuristic below leads us to believe that the right asymptotics in the Brownian case should be ε|ln(ε)|^{n−1}.
The main part of the proof deals with the computation of an upper bound for the persistence probabilities. A simple approach would be to try to bound the quantity S_n by ∏_{i=1}^{n} Z^{(i)}_{θ_1}, where θ_1 is the time at which one of the Lévy processes, say Z^{(n)}, reaches its maximum on [0, 1]. This approach of course raises two main difficulties.
i) First, the product of the other processes ∏_{i=1}^{n−1} Z^{(i)}_{θ_1} might not be positive. This can however be easily circumvented thanks to Slepian's inequality, since the processes are symmetric.
ii) The second difficulty is less obvious and is due to the arcsine law for stable processes. There is a high probability that θ_1 will be close to 0; hence, although Z^{(n)}_{θ_1} will be large, the remaining product ∏_{i=1}^{n−1} Z^{(i)}_{θ_1} will also be close to zero, thus not providing a good upper bound. The general idea of the proof is to decompose the paths of the processes (Z^{(i)}) at some last passage times and then use a time-reversal argument, so as to find a time not too close to the origin at which Z^{(n)} is large enough.
The outline of the paper is as follows: the large deviation results are proved in Section 2, the persistence probabilities in Section 3, and finally Section 4 provides the proof of an intermediate lemma.

Large deviations
The proof of the large deviation results relies on the symmetry of the processes (Z^{(i)}) and on the fact that the asymptotics of the random variables |Z_1| and sup_{0≤u≤1} Z_u are similar. Indeed, on the one hand, the lower bound is easily given by:
On the other hand, still by symmetry,
It thus remains to compute the involved quantities in both cases.
→ In the Brownian case, since sup_{0≤u≤1} W_u has the same law as |W_1|, we deduce that the asymptotics of S_n is given by that of ∏_{i=1}^{n} |W^{(i)}_1|. Its Mellin transform reads, for ν > −1:
The converse mapping theorem, see Janson [7, Theorem 6.1], then yields, for some positive constant κ:
The result follows by integration, using the asymptotics of the incomplete Gamma function.
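The Gaussian moment presumably underlying this Mellin transform is the classical formula E|W_1|^ν = 2^{ν/2} Γ((ν+1)/2)/√π, so the Mellin transform of the product of the n independent copies is its n-th power. A quick numerical sanity check (our own illustration; the function names are ours):

```python
import numpy as np
from math import gamma, pi, sqrt

def gaussian_abs_moment(nu):
    """E|W_1|^nu = 2^(nu/2) * Gamma((nu+1)/2) / sqrt(pi), valid for nu > -1."""
    return 2 ** (nu / 2) * gamma((nu + 1) / 2) / sqrt(pi)

def numeric_moment(nu, cutoff=12.0, points=200001):
    """Trapezoidal approximation of 2 * int_0^cutoff x^nu phi(x) dx."""
    x = np.linspace(0.0, cutoff, points)
    integrand = x ** nu * np.exp(-x ** 2 / 2) / sqrt(2 * pi)
    dx = x[1] - x[0]
    return 2 * dx * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

for nu in (1.0, 2.0, 3.5):
    print(nu, gaussian_abs_moment(nu), numeric_moment(nu))
```

For instance ν = 2 recovers E[W_1²] = 1, and the Mellin transform of ∏_{i=1}^{n} |W^{(i)}_1| is gaussian_abs_moment(nu) ** n by independence.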
→ Next, when α ∈ (0, 2), it is known from Bertoin [4, p.221] that there exists k > 0 such that
Point 1 of Theorem 2 is then a consequence of the following lemma (see for instance Lemma 2 in Profeta-Simon [10]):
Lemma 4. Let X and Y be two independent positive random variables satisfying the asymptotics:
where n, p ∈ ℕ and a, b, ν, µ are positive constants such that 0 < ν ≤ µ. Then there exists c > 0 such that:
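To see where an extra logarithm comes from in the boundary case ν = µ, here is a heuristic version of the computation, a sketch under the assumption (suggested by the statement of the lemma) that P(X ≤ ε) ≈ a ε^ν |ln ε|^n and P(Y ≤ ε) ≈ b ε^ν |ln ε|^p near 0:

```latex
\mathbb{P}(XY \le \varepsilon)
 \;\gtrsim\; \int_{\varepsilon}^{1} \mathbb{P}\Big(X \le \frac{\varepsilon}{y}\Big)\,\mathbb{P}(Y \in dy)
 \;\approx\; \nu\, a b\,\varepsilon^{\nu} \int_{\varepsilon}^{1}
      \big|\ln(\varepsilon/y)\big|^{n}\,\big|\ln y\big|^{p}\,\frac{dy}{y}.
```

The change of variables y = ε^s turns the last integral into |ln ε|^{n+p+1} ∫_0^1 (1−s)^n s^p ds, producing the power ε^ν together with n + p + 1 logarithmic factors.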

Persistence probabilities
We now turn our attention to the persistence estimates and start with some notation. Let X be a symmetric stable process. We denote by P_x the law of X when started from x ∈ ℝ, with the usual convention that P = P_0. Let T_0 be the first time X takes a negative value:
\[
T_0 = \inf\{t \ge 0,\ X_t < 0\}.
\]
We recall from Bertoin [4, p.219] that since X is symmetric, there exists c > 0 such that
Finally, let us introduce the last change of sign of X before time t > 0:
\[
g_t = \sup\{s \le t,\ X_s X_{s-} \le 0\}.
\]
This random time will be the key to the computation of the persistence probabilities.
Remark 5. In the following, when applying the Markov property, X̃ will always denote an independent copy of X. Besides, we shall use the notations c and κ for positive constants that may change from line to line.
We first show that the asymptotics of the distribution of g_1 is similar to that of the arcsine law.
Lemma 6. There exists a positive constant c such that
Proof. We first have, using the symmetry of X and applying the Markov property with r ∈ (0, 1):
By scaling, this is further equal to
Recall now from Doney-Savov [6] that under P_1, the random variable T_0 admits a continuous density h satisfying h(z) ∼ κ z^{−3/2} as z → +∞, for some constant κ > 0. Therefore, differentiating, we deduce that
which is the announced result.
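For Brownian motion this is explicit: g_1 is then the last zero before time 1 and follows the arcsine law P(g_1 ≤ r) = (2/π) arcsin √r, whose density 1/(π√(r(1−r))) has exactly this kind of behaviour near the endpoints. A quick random-walk illustration (our own sketch; the discrete last sign change only approximates g_1, so small discretization bias is expected):

```python
import numpy as np

def sample_g1(samples=20000, steps=500, seed=0):
    """Approximate g_1 (last sign change before time 1) by a Gaussian random walk."""
    rng = np.random.default_rng(seed)
    paths = np.cumsum(rng.standard_normal((samples, steps)), axis=1)
    change = paths[:, 1:] * paths[:, :-1] < 0        # sign change between steps
    # index of the last sign change (walks with none get g = 0), as a time fraction
    last = np.where(change.any(axis=1),
                    change.shape[1] - 1 - np.argmax(change[:, ::-1], axis=1),
                    -1)
    return (last + 1) / steps

g = sample_g1()
# Arcsine law: P(g_1 <= 1/2) = (2/pi) * arcsin(sqrt(1/2)) = 1/2
print(np.mean(g <= 0.5))
```

The empirical CDF at r = 1/2 should be close to 1/2, and a histogram of g displays the characteristic U-shape of the arcsine density.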
3.1. Lower bound for the persistence probabilities. Observe first that by scaling
where the last equality follows from the fact that, by definition of the (g^{(i)}), the product ∏_{i=1}^{n} Z^{(i)} remains negative after time 1. We now apply the Markov property at time 1:
From (3), there exists κ > 0 such that, for δ > 0 small enough,
Plugging this inequality into (4), we deduce that
which gives the lower bound.

3.2. Upper bound for the persistence probabilities. Since all the processes (Z^{(i)}) have the same law, we first have:
To simplify the notation, we shall remove the superscript (n) and denote X = Z^{(n)} and ξ_t = sup
This yields, with the usual convention that empty products equal 1,
where the last equality follows by symmetry. By scaling, we further obtain
We write X^{(x,t,y)} for the α-stable bridge of length t starting from x and ending at y. Notice that when X = W is a Brownian motion, g_1 coincides with the last zero of W before time 1, so that W_{g_1} = 0 a.s., and it is well known that the process is a standard Brownian bridge, independent from g_1, see Bertoin [4, p.230]. We shall extend this result to the stable case in the following lemma, whose proof is postponed to the end of the paper.
Lemma 7. We set by convention X_{0−} = X_0. Conditionally on the event
the process is independent from g_1 and has the same law as the stable bridge X
Let us denote by ρ(da, dr) the law of the pair (g_1^{−1/α} X_{g_1−}, g_1). Since the (Z^{(i)}) are quasi-left continuous and independent from X, we deduce from Lemma 7 that
where the equality follows from the time-reversal property of stable bridges. We shall now decompose the right-hand side of this inequality according to whether a ≤ 1 or a > 1.
3.2.1. The case {a ≤ 1}. We start with the term giving the main contribution. Let us denote by p_t the density of the random variable X_t, and recall that it is even and decreasing on (0, +∞). Using the absolute continuity formula for the stable bridge, we get:
We now study the integrand in (6). Recall that X admits the representation (B_{τ_u}, u ≥ 0), where B is a standard Brownian motion and τ is a stable subordinator with index α/2 independent from B. Let us consider the conditional expectation:
where η and (ω^{(i)}, 1 ≤ i ≤ n − 1) are some fixed càdlàg paths. We apply Slepian's lemma with the Gaussian processes
This yields, using the tower property of conditional expectations:
Observe next that, by taking u = 0, this quantity is null as soon as a ∏_{i=1}^{n−1} Z^{(i)}_1 r^{n/α} ≥ ε. Therefore, denoting θ_{1/2} = Argmax_{0≤u≤1/2} X_u, we may replace the supremum by its value at θ_{1/2} to get the bound
We further decompose this integral according to the sign of ∏_{i=1}^{n−1} Z^{(i)}_1: when this product is positive, (7) is easily bounded, while when it is ≤ 0, the situation is slightly more complex and yields a bound of the form I_n(r, 2ε) + J_n(r, 2ε), where we have used in the last line the inequality 1_{{x ≤ ε(a+b)}} ≤ 1_{{x ≤ 2aε}} + 1_{{x ≤ 2bε}}. Going back to (6), we are thus led to study the asymptotics of ∫_0^1 (I_n(r, ε) + J_n(r, ε)) P(g_1 ∈ dr).
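The subordination representation X_u = B_{τ_u} used above can be checked directly at the Cauchy point α = 1, where the (α/2 = 1/2)-stable subordinator value can be realized as 1/N² for a standard Gaussian N. The two-line simulation below is our own illustration, with the standard normalizations:

```python
import numpy as np

rng = np.random.default_rng(0)
N1 = rng.standard_normal(200000)
N2 = rng.standard_normal(200000)

# 1/2-stable subordinator value: tau = 1/N1^2 satisfies E[exp(-s*tau)] = exp(-sqrt(2s))
tau = 1.0 / N1 ** 2
# Brownian motion evaluated at the independent random time tau
X = np.sqrt(tau) * N2

# Subordination predicts E[exp(i*l*X)] = E[exp(-l^2*tau/2)] = exp(-|l|): standard Cauchy
print(np.mean(np.abs(X) <= 1.0))   # should be close to P(|Cauchy| <= 1) = 1/2
```

For general α ∈ (0, 2), replacing 1/N² by an (α/2)-stable subordinator value reproduces the characteristic function exp(−c|λ|^α) in the same way.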
We start with I_n(r, ε), which will give the main contribution. From Lemma 6, we may choose δ ∈ (0, 1) small enough such that
for some constant c > 0. On the one hand, when r ≥ δ, we obtain, since θ_{1/2} ≤ 1/2:
On the other hand, when r ≤ δ, we deduce from the Markov property at time 1 that:
Using the identity
where Y^{(i)}_1 is a copy of Z^{(i)}_1, independent from the processes (Z^{(i)}) and (Z̃^{(i)}), we then obtain the bound
where we have used the classic inequality |x + y|^α ≤ 2(|x|^α + |y|^α), valid since α ∈ (0, 2]. We further assume that δ is taken small enough so that, from Lemma 4 and the asymptotics (3), we have
for some positive constant κ, where a ∨ b = max(a, b). We shall now proceed by iteration.
i) If |Z^{(n−1)}_{1−θ_{1/2}}| ≥ 1, then we may remove |Z^{(n−1)}_{1−θ_{1/2}}| from the second product in (10), and deduce from (11) that I_n(r, ε) is smaller than
ii) If |Z^{(n−1)}_{1−θ_{1/2}}| ≤ 1, we may instead replace |Z^{(n−1)}_{1−θ_{1/2}}| by 1 in the first product in (10), and deduce, still from (11), that I_n(r, ε) is smaller than
Iterating the procedure, we obtain that I_n(r, ε) may be bounded by a sum of 2^{n−1} terms:
and it remains to study the asymptotics of the integrands. From Lemma 4 and (1), we deduce that
→ when α ∈ (0, 2), all the terms have the same contribution:
→ while, for α = 2, they depend on the cardinality of ∆:
Plugging these expressions into (12) finally gives the announced upper bound.
The study of the asymptotics of J_n(r, ε) follows the same pattern of proof, except that we do not need to introduce the random variables (Y^{(i)}_1). Indeed, when r ≥ δ, we get the same asymptotic bound
while for r ≤ δ we obtain, applying the Markov property:
Using the decompositions |Z^{(i)}_1| ≤ 1 (resp. |Z^{(i)}_1| ≥ 1) and following the same steps as for I_n(r, ε), we deduce that
∫_0^δ J_n(r, ε) P(g_1 ∈ dr)   (13)
and, as ε → 0, all the terms on the right-hand side have the same asymptotics: ε^{α/2} |ln(ε)|.

3.2.2. The case {a > 1}. Starting back from (5), we first bound the supremum by its value at u = 0:
The study of this last expression is similar to that of J_n(r, ε), replacing X_{θ_{1/2}} by 1. Indeed, on the one hand, taking δ small enough as before, we first deduce from Lemma 4 that:
On the other hand, for r ≤ δ, we deduce, as for (13), that
When ε → 0, all the integrals on the right-hand side are finite, hence we obtain the asymptotics ε^{α/2}, which is negligible.

Proof of Lemma 7
Proof. This lemma being classical for Brownian motion, we assume that α ∈ (0, 2). Let 0 < s ≤ t ≤ 1 and let F be a positive functional. Denote by f(y; z, r) the probability density function of (X_{T_0}, T_0) when X_0 = y. By symmetry and time reversal, we first have