Estimation of the Hurst and the stability indices of a $H$-self-similar stable process

In this paper we estimate both the Hurst and the stability indices of an $H$-self-similar stable process. More precisely, let $X$ be an $H$-sssi (self-similar with stationary increments) symmetric $\alpha$-stable process. The process $X$ is observed at the points $\frac{k}{n}$, $k=0,\ldots,n$. Our estimators are based on $\beta$-variations with $-\frac{1}{2}<\beta<0$. We obtain consistent estimators, with rates of convergence, for several classical $H$-sssi $\alpha$-stable processes (fractional Brownian motion, well-balanced linear fractional stable motion, Takenaka's processes, L\'evy motion). Moreover, we obtain asymptotic normality of our estimators for fractional Brownian motion and L\'evy motion.

Keywords: $H$-sssi processes; stable processes; self-similarity parameter estimator; stability parameter estimator.


Introduction
Self-similar processes play an important role in probability because of their connection to limit theorems, and they are widely used to model natural phenomena. For instance, persistent phenomena in internet traffic, hydrology, geophysics and financial markets, e.g., [5], [15], [16], are known to be self-similar. Stable processes have attracted growing interest in recent years: data with "heavy tails" have been collected in fields as diverse as economics, telecommunications, hydrology and condensed-matter physics, which suggests non-Gaussian stable processes as possible models, e.g., [15]. Self-similar α-stable processes have been proposed to model some natural phenomena with heavy tails; see [15] and the references therein.
Estimating the self-similarity and stability indices of an H-sssi α-stable process is thus important. Many works in the literature estimate the Hurst index H of fractional Brownian motion and, more generally, of second-order processes, using quadratic variations ([2], [3], [5], [9]) or wavelet coefficients ([1], [10]). Log-variations have also been used to estimate the self-similarity index H, but the rate of convergence is slow ([5], [8]). Complex variations can be used to estimate H, but not α; see [7] for more details.
However, few studies in the literature address the estimation of the stability index or of the multistable function, e.g., [6]. Using p-variations with 0 < p < α requires a priori knowledge of the stability index α, e.g., [4], [12]. Therefore, in this paper, we use β-variations with −1/2 < β < 0 to estimate both H and α. Since a stable random variable has a density function, the β-variations have finite expectations and covariances for −1/2 < β < 0. We obtain consistent estimators of H and α, with rates of convergence, and we apply these results to several classical H-sssi α-stable processes. In the cases of the fractional Brownian motion and the SαS Lévy motion, we also obtain the asymptotic distributions of these estimators through a central limit theorem.
The remainder of this article is organized as follows. In the next section, we present the setting and general assumptions. Section 3 introduces the theorem used to establish the estimators of H and α, along with the main convergence results for four examples (fractional Brownian motion, SαS Lévy motion, well-balanced linear fractional stable motion, Takenaka's processes) and the central limit theorem for fractional Brownian motion and SαS Lévy motion. In Section 4, we provide auxiliary results used in the proofs of the main theorems. Section 5 computes the variances of the limit distributions appearing in the central limit theorem of Section 3. Finally, we gather the proofs of the main results in Section 6.

Setting and general assumptions
Let us recall the definitions of an $H$-sssi process and of an $\alpha$-stable process [15]. A real-valued process $X = \{X(t)\}$ is $H$-self-similar with stationary increments ($H$-sssi) if, for every $c > 0$, $\{X(ct)\} \overset{d}{=} \{c^H X(t)\}$, where $\overset{d}{=}$ stands for equality of finite-dimensional distributions. A random variable $X$ is said to have a symmetric $\alpha$-stable distribution (S$\alpha$S) if there are parameters $\alpha \in (0, 2]$, $\sigma > 0$ such that its characteristic function has the form $\mathbb{E} e^{i\theta X} = \exp(-\sigma^\alpha |\theta|^\alpha)$, $\theta \in \mathbb{R}$. When $\sigma = 1$, a S$\alpha$S random variable is said to be standard. Let $X$ be an $H$-sssi S$\alpha$S random process with $0 < \alpha \le 2$. Let $a = (a_0, a_1, \ldots, a_K)$, $K \in \mathbb{N}$, be a finite sequence with $\sum_{k=0}^K a_k = 0$ and $\sum_{k=0}^K k a_k = 0$. The increments of $X$ with respect to the sequence $a$ are defined by $\triangle_{p,n}X = \sum_{k=0}^K a_k X\big(\frac{p+k}{n}\big)$, $p = 0, \ldots, n-K$. Let $\beta \in \mathbb{R}$, $-\frac{1}{2} < \beta < 0$. We set $V_n(\beta) = \frac{1}{n-K+1}\sum_{p=0}^{n-K} |\triangle_{p,n}X|^\beta$ (2) and $W_n(\beta) = n^{\beta H} V_n(\beta)$.
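The statistics $V_n(\beta)$ and $W_n(\beta)$ are straightforward to compute from the sampled path. As a minimal numerical sketch (our own illustration, not part of the paper: the filter $a = (1, -2, 1)$, the two-scale ratio estimator and all function names are our choices), the following Python code computes $V_n(\beta)$ and estimates $H$ for a simulated Brownian motion, the $H$-sssi S$\alpha$S case $H = 1/2$, $\alpha = 2$:

```python
import math
import random

def beta_variation(path, beta, a=(1.0, -2.0, 1.0)):
    """V_n(beta): empirical mean of |increment|^beta for the filter a.

    The default second-order filter a = (1, -2, 1) satisfies the
    conditions sum(a_k) = 0 and sum(k * a_k) = 0.
    """
    K = len(a) - 1
    n = len(path) - 1
    vals = [abs(sum(a[k] * path[p + k] for k in range(K + 1))) ** beta
            for p in range(n - K + 1)]
    return sum(vals) / len(vals)

def estimate_H(path, beta=-0.25):
    """Two-scale ratio estimator: V_n ~ c n^{-beta H} => V_n / V_{n/2} ~ 2^{-beta H}."""
    full = beta_variation(path, beta)        # resolution 1/n
    half = beta_variation(path[::2], beta)   # resolution 2/n
    return -math.log2(full / half) / beta

# Test case: Brownian motion, the H-sssi SαS process with H = 1/2, α = 2.
random.seed(0)
n = 1 << 14
bm = [0.0]
for _ in range(n):
    bm.append(bm[-1] + random.gauss(0.0, math.sqrt(1.0 / n)))

H_hat = estimate_H(bm)  # expected to be close to 0.5
```

By self-similarity $\mathbb{E}|\triangle_{p,n}X|^\beta = n^{-\beta H}\,\mathbb{E}|\triangle_{p,1}X|^\beta$, so $V_n(\beta) \approx c\, n^{-\beta H}$ and the ratio of the two resolutions isolates $H$; this is the scaling relation that `estimate_H` inverts.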
With β ∈ (−1/2, 0) fixed, we will make some of the following assumptions: 1. condition (10) holds; 2. there exist a sequence {b_n, n ∈ ℕ} and a constant C such that condition (11) holds. Remark 2.1. It is clear that condition (11) implies condition (10).

Examples
In this subsection, we state the main results of Section 3, giving rates of convergence for four examples: fractional Brownian motion, SαS Lévy motion, well-balanced linear fractional stable motion and Takenaka's process, with a central limit theorem in the first two cases.

Central limit theorem for fractional Brownian motion and SαS Lévy motion

Definition 3.1 (Fractional Brownian motion).
Fractional Brownian motion is the centered Gaussian process with covariance given by $\mathbb{E}[X(t)X(s)] = \frac{1}{2}\big(t^{2H} + s^{2H} - |t-s|^{2H}\big)$. Fractional Brownian motion is an $H$-sssi 2-stable process (see, e.g., [5], p. 59).
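The fBm covariance $\frac{1}{2}(t^{2H} + s^{2H} - |t-s|^{2H})$ pins down both defining properties. A short sketch (illustrative only, not from the paper) evaluating this covariance and checking $H$-self-similarity, $\mathrm{Cov}(X(ct), X(cs)) = c^{2H}\,\mathrm{Cov}(X(t), X(s))$, together with the stationary-increment identity $\mathbb{E}|X(t)-X(s)|^2 = |t-s|^{2H}$:

```python
def fbm_cov(t, s, H):
    """Covariance E[X(t)X(s)] of fractional Brownian motion with Hurst index H."""
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

H, c, t, s = 0.7, 3.0, 2.0, 0.5
# H-self-similarity: Cov(X(ct), X(cs)) = c^{2H} Cov(X(t), X(s))
lhs = fbm_cov(c * t, c * s, H)
rhs = c ** (2 * H) * fbm_cov(t, s, H)
# Stationary increments: E|X(t) - X(s)|^2 = |t - s|^{2H}
incr_var = fbm_cov(t, t, H) - 2 * fbm_cov(t, s, H) + fbm_cov(s, s, H)
```

Both identities hold exactly for this covariance function, for any $0 < H < 1$.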

Theorem 3.2. Let X be a fractional Brownian motion or a SαS Lévy motion. Then, as n → +∞, the estimators of H and α are asymptotically normal, where Ξ, Σ are defined as in Section 5.

Well-balanced linear fractional stable motion

Definition 3.3 (Well-balanced linear fractional stable motion).
Let $M$ be a S$\alpha$S random measure, $0 < \alpha \le 2$, with Lebesgue control measure, and consider $X(t) = \int_{\mathbb{R}} \big(|t-s|^{H-1/\alpha} - |s|^{H-1/\alpha}\big)\, M(ds)$, where $0 < H < 1$, $H \ne 1/\alpha$. The process $X$ is called the well-balanced linear fractional stable motion. Then $X$ is an $H$-sssi process (Proposition 7.4.2, [15]).
It is clear that b_n → 0 as n → +∞.
Takenaka's process is defined by a SαS integral representation; following Theorem 4 in [14], the process X is ν/α-sssi. It is clear that b_n → 0 as n → +∞.
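For the SαS Lévy motion one has $H = 1/\alpha$, so an estimate of $H$ from $\beta$-variations immediately gives an estimate of $\alpha$. The sketch below (our own illustration: the Chambers–Mallows–Stuck sampler, the filter $(1, -2, 1)$ and the two-scale ratio estimator are our choices, not the paper's exact procedure) simulates a path with $\alpha = 1.5$ and recovers $H \approx 1/\alpha$:

```python
import math
import random

def sas_standard(alpha, rng):
    """Chambers-Mallows-Stuck sampler for a standard symmetric alpha-stable variable."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    return (math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
            * (math.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

def beta_variation(path, beta):
    """Empirical beta-variation with the second-order filter (1, -2, 1)."""
    vals = [abs(path[p] - 2 * path[p + 1] + path[p + 2]) ** beta
            for p in range(len(path) - 2)]
    return sum(vals) / len(vals)

rng = random.Random(1)
alpha, beta, n = 1.5, -0.25, 1 << 14
# SαS Lévy motion on [0, 1]: iid SαS increments scaled by (1/n)^{1/alpha}
x = [0.0]
for _ in range(n):
    x.append(x[-1] + n ** (-1 / alpha) * sas_standard(alpha, rng))

# V_n ~ c n^{-beta H}  =>  H_hat = -log2(V_n / V_{n/2}) / beta, then alpha_hat = 1 / H_hat
H_hat = -math.log2(beta_variation(x, beta) / beta_variation(x[::2], beta)) / beta
alpha_hat = 1.0 / H_hat
```

With $\alpha = 1.5$ the true value is $H = 1/\alpha = 2/3$; the negative-order variation keeps all moments involved finite despite the heavy tails, which is the point of taking $-1/2 < \beta < 0$.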
Proofs are given in Section 6.

Auxiliary results
In this section, we establish the results used to prove the main results of Section 3.

Results on mean and covariance of functions
We present here some results on the mean and covariance of functions of $H$-sssi S$\alpha$S random processes, proved using the theory of distributions. These results are used to verify assumptions (10) and (11) for the four examples in Section 3 and to establish the estimator of $\alpha$. Let $(S, \mu)$ be a measure space, $h, g \in L^\alpha(S, \mu)$, and let $M$ be a symmetric $\alpha$-stable random measure on $S$ with control measure $\mu$, $\alpha \in (0, 2)$. Set $U = \int_S h \, dM$ and $V = \int_S g \, dM$. Let $\beta \in \mathbb{C}$ be such that $\mathrm{Re}(\beta) \in (-1, 0)$. Then $\mathbb{E}|U|^\beta = c_\beta \int_{\mathbb{R}} \mathbb{E} e^{iUy}\, |y|^{-(\beta+1)}\, dy$ (16) in the sense of distributions, where $T = |x|^\beta$, $\mathcal{F}T$ is the Fourier transform of $T$, and $c_\beta$ is the constant with $\mathcal{F}T(y) = c_\beta |y|^{-(\beta+1)}$.

Proofs
The following lemmas are used in the proofs of Theorems 4.1-4.2.
Proof. Let $f(x)$ be the density function of $X$. Since $X$ is a S$\alpha$S random variable, $X$ has a density function $f(x)$, and $f(x)$ is continuous with $f(x) \ge 0$ for all $x \in \mathbb{R}$. Since $X$ is symmetric, $f$ is even: $f(x) = f(-x)$. We first consider the case $\beta \in \mathbb{R}$ with $-1 < \beta < 0$. When $-1 < \beta < 0$, we can write $\mathbb{E}|X|^\beta = \int_{\mathbb{R}} |x|^\beta f(x)\,dx = 2\int_0^{+\infty} x^\beta f(x)\,dx$, which is finite since $f$ is bounded and $\beta > -1$. So we have the conclusion.
Proof. For $\beta \in \mathbb{C}$ with $-1 < \mathrm{Re}(\beta) < 0$, following Example 5, §7, Chapter VII of [17], $T$ is a distribution and it has Fourier transform $\mathcal{F}T(y) = C|y|^{-(\beta+1)}$, where $C$ is a constant. We determine $C$ using the function $k(x) = e^{-x^2/2}$. Since $T \in L^1_{loc}(\mathbb{R})$ and $k \in \mathcal{S}(\mathbb{R})$, in the sense of distributions we have $\langle \mathcal{F}T, k\rangle = \langle T, \mathcal{F}k\rangle$. Computing both sides and changing variables yields the value of $C$.

Lemma 4.11. Let $\psi$ be a function in the Schwartz class. Then, for all $x \ne 0$, there exists a constant $C > 0$ such that for all $\epsilon > 0$ the stated inequalities hold.

Proof. Set $k(y) = |T(x - y)|$; for the first inequality, we need to prove the corresponding bound. Since $\lim_{x\to\infty} F(x)\psi(x) = 0$ and $F(0) = 0$, and since $\psi$ belongs to the Schwartz class, we obtain the first inequality. Now we prove the second one, first considering the corresponding case; then we get the second inequality.
Using the change of variable $t = \epsilon y$, so that $dt = \epsilon\, dy$, we write the integral as $I_1 + I_2$ and consider $I_1$ and $I_2$ separately. Since $\psi$ is in the Schwartz class, $\|\psi\|_\infty < \infty$ and $\|\psi\|_1 < \infty$, which controls $I_1$. Now we consider $I_2$. Since $\psi$ is in the Schwartz class, there exists a constant $C > 0$ such that $|\psi(t)| \le C/t^2$ for $|t| \ge 1$. We choose $\epsilon > 0$ such that $\delta/\epsilon \ge 1$. Then, changing variable $t = y/\epsilon$ and afterwards $u = \epsilon t - x$, we obtain a bound with a constant $C_1$ depending on $\delta$, and a further bound with a constant $C_2$ depending on $\delta$. Similarly, since $\delta/\epsilon \ge 1$, we get a bound with a constant $C_3$ depending on $x$ and $\delta$. So we get $I_2 \le C\epsilon$, where $C$ is a constant depending on $x$ and $\delta$, and we can choose $\epsilon$ small enough that $I_2 \le \theta/2$. Then, for all $\theta > 0$, there exists $\epsilon_0$ such that for all $0 < \epsilon < \epsilon_0$ we have $|I| \le \theta$. Therefore we get the conclusion.
Proof of Theorem 4.1. From (16) and in the sense of distributions, using Fubini's theorem (since $\mathcal{F}T, \mathcal{F}^{-1}f \in L^1_{loc}(\mathbb{R})$, $\varphi_\epsilon \in C_0^\infty(\mathbb{R})$ and $\varphi_\epsilon$ is an even function), we obtain equation (22). We now find the limits of the two sides of (22) as $\epsilon \to 0$. Consider first the left-hand side of (22): applying the Lebesgue dominated convergence theorem, it converges to the desired limit. Turning to the right-hand side of (22) and applying Lemma 4.9, we get pointwise convergence almost everywhere. Following Lemma 4.8, we also have an integrable dominating bound, since $\mathrm{Re}(\beta) \in (-1, 0)$. Applying the Lebesgue dominated convergence theorem again, the right-hand side of (22) converges to $\int_{\mathbb{R}} \mathcal{F}^{-1}f(y)\, \mathcal{F}T(y)\, dy$. So we have (16).
Let $\mu$ be the distribution of the random vector $(U, V)$; then $\mu$ is a probability measure on $\mathbb{R}^2$.
Now we consider the right-hand side of (23). Since the convolution of two functions in the Schwartz class is again a function in the Schwartz class, we get $\psi_\epsilon * \psi_{1/\epsilon} \in \mathcal{S}(\mathbb{R})$. So, following Lemma 4.11, there exists a constant $C > 0$ such that the corresponding bound holds; similarly, we also get $|\mathcal{F}F_{2\epsilon}(y)| \le C\,|\mathcal{F}T_2(y)|$. Then, applying Lemma 4.12 and Lemma 4.13, the corresponding limits follow. From Theorem 4.1, Lemma 4.5 and Lemma 4.10 we deduce an integrable dominating bound, and by the Lebesgue dominated convergence theorem, as $\epsilon \to 0$, the right-hand side of (23) converges to the desired limit. Now we consider the left-hand side of (23); we can again apply the Lebesgue dominated convergence theorem, and as $\epsilon \to 0$ it converges to the corresponding limit. So we get (17). Now we prove (18). Following Theorem 4.1 and Theorem 4.2, for $-1/2 < \mathrm{Re}(\beta) < 0$, and applying Lemma 4.10, we get (18).
Proof of Lemma 4.1. For the case $\alpha = 2$, let $Y$ be a standard S2S variable. Then, for $-1 < \mathrm{Re}(\beta) < 0$, the expectation can be computed directly. Let us now study the case $\alpha \ne 2$. Following (20) and Theorem 4.1, and using the change of variable $y^\alpha = t$ together with the property $\Gamma(x+1) = x\Gamma(x)$, we obtain the stated formula.
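For the Gaussian case the computation reduces to absolute moments of a normal variable: for $Z \sim N(0,1)$ and $\beta > -1$, $\mathbb{E}|Z|^\beta = 2^{\beta/2}\,\Gamma\big(\frac{\beta+1}{2}\big)/\sqrt{\pi}$ (a standard S2S variable with $\sigma = 1$ differs only by the scale factor $\sqrt{2}$). A quick Monte Carlo sketch (illustrative only, not part of the paper) checks this closed form for $\beta = -1/4$:

```python
import math
import random

def abs_moment_normal(beta):
    """Closed form E|Z|^beta for Z ~ N(0, 1), valid for beta > -1."""
    return 2 ** (beta / 2) * math.gamma((beta + 1) / 2) / math.sqrt(math.pi)

rng = random.Random(42)
beta = -0.25
n = 200_000
# Monte Carlo estimate of E|Z|^beta; finite variance since 2*beta > -1
mc = sum(abs(rng.gauss(0.0, 1.0)) ** beta for _ in range(n)) / n
exact = abs_moment_normal(beta)  # ≈ 1.23
```

For $\beta = 1$ the formula reduces to the familiar $\mathbb{E}|Z| = \sqrt{2/\pi}$, a useful sanity check.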
Using L'Hôpital's rule, and then L'Hôpital's rule again for the remaining limit, we get an expression in which $P_\rho(x, y)$ is a fourth-degree polynomial depending continuously on $\rho$. A Taylor expansion up to order 2 leads to a remainder with $\bar\rho \in (0, \rho)$. On the compact set $|\rho| \le 1/2$, the polynomial $P_\rho(x, y)$ can be bounded by a fourth-degree polynomial $P(x, y)$: $|P_\rho(x, y)| \le |P(|x|, |y|)|$ for all $x, y \in \mathbb{R}$. Moreover, with $|\bar\rho| \le 1/2$, we get $\big|\frac{\bar\rho}{1-\bar\rho^2}\big| \le 2/3$, so the factor $\exp\big(\frac{\bar\rho xy}{1-\bar\rho^2}\big)$ is dominated. Because the exponential function grows faster than any polynomial, the dominating bound is integrable. So we have the conclusion.

Results used for the estimation of α
Here we establish some basic results used for the estimation of α.

Lemma 4.14. $g_{u,v}$ is a strictly decreasing function on $(0, +\infty)$ and (25) holds.
Proof of Corollary 4.3. The result follows directly from Lemma 4.14 and the inverse function theorem.
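Since $g_{u,v}$ is strictly decreasing and continuous, the estimator of $\alpha$ can be obtained in practice by inverting it numerically, e.g. by bisection. A generic sketch (the function $g$ below is a hypothetical stand-in, not the paper's $g_{u,v}$, which is not reproduced here):

```python
def invert_decreasing(g, y, lo, hi, tol=1e-10):
    """Solve g(x) = y by bisection, for a strictly decreasing continuous g
    with g(hi) <= y <= g(lo)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > y:
            lo = mid   # g(mid) too large => the root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical stand-in for g_{u,v}: g(a) = 1/a, strictly decreasing on (0, +inf).
alpha = invert_decreasing(lambda a: 1.0 / a, 0.8, 0.1, 2.0)  # solves 1/α = 0.8, i.e. α = 1.25
```

Strict monotonicity is exactly what makes the inverse well defined, so the bisection converges to the unique solution at rate $2^{-k}$ in the number of iterations $k$.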

Results
In this part, we give some lemmas related to the rates of convergence in the main results. From basic analysis, we get the following lemma. The next two lemmas can easily be proved from Lemma 2.12 in [18].
Let $X_n$ be random variables whose ranges lie in $D$ such that $X_n \xrightarrow{P} a$ and $X_n - a = O_P(b_n)$, where $\{b_n\}_n$ is a non-negative sequence and $b_n \to 0$ as $n \to +\infty$.
where $\{b_n\}_n$ is a non-negative sequence and $b_n \to 0$ as $n \to +\infty$.

Proofs
Proof of Lemma 4.16. If $-1 < p < 0$, we take a constant $\epsilon$ such that $0 < \epsilon < -p$; then, since $p + \epsilon < 0$, we get the required bound. Following [18], p. 12, and applying Lemma 2.12 in [18] to $R(x) = f(x + a) - f(a) - x f'(a)$ and to the sequence of random variables $X_n - a$, we get the conclusion. Since $f$ is differentiable at $(a, b)$, we can write the analogous expansion.

Proof of Lemma 4.18. Similarly to the proof of Lemma 4.17, applying Lemma 2.12 in [18] to the corresponding remainder and to the sequence of random vectors $(X_n - a, Y_n - b)$, we get the conclusion.

Variances
In this part we compute the variances of the limit distributions in the central limit theorem for fractional Brownian motion and the SαS Lévy motion (Theorem 3.2).

Fractional Brownian motion
Let $Z_k$ denote the normalized increments. Then $Z_k \sim N(0, 1)$, $\{Z_k\}_{k\ge 0}$ is a centered stationary Gaussian family, and for $k, l \ge 0$ the correlation $\rho(k - l) = \mathbb{E}[Z_k Z_l]$ is well defined; here $\rho(0) = 1$. Let $f$ and $g$ be defined by (42). Following Proposition 1.4.2-(iv) in [11], we can write $f$ and $g$ in terms of Hermite polynomials in a unique way: $f = \sum_{q \ge d} f_q H_q$ and $g = \sum_{q \ge d} g_q H_q$, where $f_q, g_q \in \mathbb{R}$, $H_q$ is the $q$-th Hermite polynomial, and $d$ is the minimum of the Hermite ranks of $f$ and $g$.
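The Hermite decomposition relies on the orthogonality $\mathbb{E}[H_p(Z)H_q(Z)] = p!\,\delta_{pq}$ of the probabilists' Hermite polynomials under the standard Gaussian measure, e.g. $H_2(x) = x^2 - 1$ and $H_3(x) = x^3 - 3x$. A small Monte Carlo sketch (illustrative only, not part of the paper) checking this:

```python
import random

def H2(x):
    """Probabilists' Hermite polynomial of degree 2."""
    return x * x - 1.0

def H3(x):
    """Probabilists' Hermite polynomial of degree 3."""
    return x ** 3 - 3.0 * x

rng = random.Random(7)
zs = [rng.gauss(0.0, 1.0) for _ in range(200_000)]
m22 = sum(H2(z) * H2(z) for z in zs) / len(zs)  # ≈ 2! = 2 (same degree)
m23 = sum(H2(z) * H3(z) for z in zs) / len(zs)  # ≈ 0 (different degrees)
```

It is this orthogonality that makes the coefficients $f_q$, $g_q$ of the expansion unique and the Hermite rank well defined.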

Proofs of the main results
In this section, we shall denote by C a generic constant which may change from occurrence to occurrence.
Using the fact that $\{X_t\}_{t\ge 0}$ has independent increments, we get that $X(l + p') - X(l)$ and $X(k + p) - X(k)$ are independent for all $p, p' = 0, \ldots, K$, since $0 \le l \le l + p' \le k \le k + p$. It follows that $\triangle_{k,1}X$ and $\triangle_{0,1}X$ are independent for $|k| \ge K$.
To prove the results for $H$, we will first prove that, for all $n \in \mathbb{N}$ with $n > 2K$, the normalized statistic converges in distribution to a normal distribution as $n \to +\infty$.
We can see that if $x > K + r$ then $\mathbb{1}_{C_i}(x, r) = 0$ for all $i = 0, \ldots, K$; therefore $f_0(x, r) = 0$. If $x < k - r$ then $\mathbb{1}_{C_{k+i}}(x, r) = 0$ for all $i = 0, \ldots, K$; therefore $f_k(x, r) = 0$.
Then we get condition (11). Applying Theorem 3.1, it follows that $W_n(\beta) - \mathbb{E}|\triangle_{0,1}X|^\beta = O_P(b_n)$, where $b_n$ is defined as in (14).