An estimation of the stability and the localisability functions of multistable processes

Multistable processes are tangent at each point to a stable process, but their index of stability and their index of localisability vary along the path. In this work, we give two estimators, one for the stability function and one for the localisability function, and we prove the consistency of both. We illustrate these convergences with two classical examples, the Lévy multistable process and the Linear Multifractional Multistable Motion.


Introduction
Multifractional multistable processes have recently been introduced as models for phenomena where the regularity and the intensity of jumps are not constant, particularly when the increments of the observed trajectories are not stationary. In Figure 1, we display a path of financial data from federal funds, whose jumps are more or less marked. Multistable processes extend the stable models to take this additional variability into account (see Figure 2 for an example of a realization of such a process, computed with the simulation method explained in [4]). We can then describe events with a low intensity of jumps at some times, which may be very erratic at other times. We provide another example of application in Figure 7 of Section 4.3, where we consider a path coming from ECG data.
Multistable processes are stochastic processes which are locally stable, but where the index of stability varies with "time", and is therefore a function. They were constructed in [4,5,6,8] using respectively moving averages, sums over Poisson processes, multistable measures, and the Ferguson-Klass-LePage series representation, this last definition being the representation used hereafter. Under general assumptions, these processes are localisable, that is, locally self-similar, with an index of self-similarity which is also a function. The aim of this paper is to introduce an estimator for each function: the index of stability and the self-similarity function.

Figure 1: Financial data where the increments do not appear to be stationary: the intensity of jumps varies over time.
Let us recall the definition of a localisable process [2,3]: Y = {Y(t) : t ∈ R} is said to be localisable at u if there exist h(u) ∈ R and a non-trivial limiting process Y′_u such that

lim_{r→0} (Y(u + rt) − Y(u)) / r^{h(u)} = Y′_u(t),

where the convergence is in finite dimensional distributions. When the limit exists, Y′_u = {Y′_u(t) : t ∈ R} is termed the local form or tangent process of Y at u, and when the convergence is in distribution, the process is called strongly h-localisable.

Ferguson-Klass-LePage series representation
We now define the multistable processes via the Ferguson-Klass-LePage series representation, as "diagonals" of the random fields described below. In the sequel, (E, 𝓔, m) will be a measure space and U an open interval of R. We will treat both the case where m is a finite measure and the case where it is σ-finite. Let α be a C¹ function defined on U and ranging in [c, d] ⊂ (0, 2). Let f(t, u, .) be a family of functions such that, for all (t, u) ∈ U², f(t, u, .) ∈ F_{α(u)}(E, 𝓔, m). We also define r : E → R₊ such that m̂(dx) = (1/r(x)) m(dx) is a probability measure. (Γ_i)_{i≥1} will be a sequence of arrival times of a Poisson process with unit arrival rate, and (γ_i)_{i≥1} a sequence of i.i.d. random variables with distribution P(γ_i = 1) = P(γ_i = −1) = 1/2. Let (V_i)_{i≥1} be a sequence of i.i.d. random variables with distribution m̂ on E, and assume that the three sequences (Γ_i)_{i≥1}, (V_i)_{i≥1} and (γ_i)_{i≥1} are independent. As in [8], we will consider the following random field:

X(t, u) = C_{α(u)}^{1/α(u)} Σ_{i≥1} γ_i Γ_i^{−1/α(u)} r(V_i)^{1/α(u)} f(t, u, V_i),   (1.2)

where C_η = (∫₀^∞ x^{−η} sin(x) dx)^{−1}. Note that when the function α is constant, (1.2) is just the Ferguson-Klass-LePage series representation of a stable random variable (see [1,7,10,11,14] and [15, Theorem 3.10.1] for specific properties of this representation).
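As an illustration, the constant-α case of this series can be simulated directly. The following sketch is ours, not part of the construction above: the truncation level `n_terms` and the choice E = [0, 1] with m the Lebesgue measure, f ≡ 1 and r ≡ 1 are assumptions made for the illustration only.

```python
import math
import numpy as np

def fkl_stable_sample(alpha, n_terms=20000, rng=None):
    """Truncated Ferguson-Klass-LePage series for one symmetric
    alpha-stable variate, 0 < alpha < 2, alpha != 1 (at alpha = 1 the
    closed form for C_eta below degenerates).

    Constant-alpha case of the field (1.2) with E = [0,1], m = Lebesgue
    (so r = 1) and f = 1; n_terms is a truncation assumption.
    """
    rng = np.random.default_rng(rng)
    # Arrival times Gamma_i of a unit-rate Poisson process:
    # cumulative sums of i.i.d. Exp(1) variables.
    arrivals = np.cumsum(rng.exponential(1.0, n_terms))
    # Rademacher signs gamma_i.
    signs = rng.choice([-1.0, 1.0], size=n_terms)
    # C_eta = (int_0^inf x^{-eta} sin x dx)^{-1}
    #       = (1 - eta) / (Gamma(2 - eta) * cos(pi * eta / 2)).
    c_alpha = (1.0 - alpha) / (math.gamma(2.0 - alpha)
                               * math.cos(math.pi * alpha / 2.0))
    return c_alpha ** (1.0 / alpha) * np.sum(signs * arrivals ** (-1.0 / alpha))
```

The series is only conditionally convergent for α ≥ 1, which is why the Rademacher signs are essential: without them the partial sums of Γ_i^{−1/α} would diverge.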

Multistable processes
Multistable processes are obtained by taking diagonals on the field X defined in (1.2), i.e.

Y(t) = X(t, t).   (1.3)
Indeed, as shown in Theorems 3.3 and 4.5 of [8], provided some conditions are satisfied both by X and by the function f, Y is a localisable process whose local form is a stable process. The aim of this work is to estimate both the stability function α and the localisability function h. Given one trajectory of a multistable process, we provide an estimator for each function and we obtain convergence in all the spaces L^p for the two estimators. We illustrate these convergences with two classical examples, the Lévy multistable process and the Linear Multifractional Multistable Motion.

Construction of the estimators
Let Y be a multistable process defined by (1.3). The estimation of the localisability function and the stability function is based on the increments (Y_{k,N}) of Y. Define the sequence (Y_{k,N})_{k∈Z, N∈N} by

Y_{k,N} = Y((k+1)/N) − Y(k/N).

Let t_0 ∈ R be fixed. We introduce an estimator of H(t_0) with

Ĥ_N(t_0) = −(1 / (n(N) log N)) Σ_k log |Y_{k,N}|,

where (n(N))_{N∈N} is a sequence taking even integer values and the sum runs over the n(N) indices k closest to Nt_0. We expect the sequence (Ĥ_N(t_0))_N to converge to H(t_0) thanks to the localisability of the process Y: for integers k and N such that k/N is close to t_0, the increment Y_{k,N} behaves in distribution like N^{−H(t_0)} Y′_{t_0}(1), so that Z_{k,N} := log |Y_{k,N}| + H(t_0) log N behaves like log |Y′_{t_0}(1)| when N tends to infinity and k/N tends to t_0. We control the terms (Z_{k,N}) near t_0 through the mean (1/n(N)) Σ Z_{k,N}, and we can expect this sum to be bounded in the L^r spaces, which yields the convergence with a rate 1/log N. The convergence is proved in Theorem 3.1.
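The construction above can be sketched numerically. The code below is a minimal illustration, with our own function names; the window centering and the toy data (a Brownian path, for which H(t) = 1/2 everywhere) are assumptions of the sketch, not part of the paper's setting.

```python
import numpy as np

def localisability_estimate(path, t0, n_window):
    """Sketch of the estimator H_hat_N(t0): minus the average of
    log |Y_{k,N}| over n(N) increments around t0, divided by log N.
    `path` holds the values Y(k/N) for k = 0..N."""
    N = len(path) - 1
    incr = np.diff(path)                  # Y_{k,N} = Y((k+1)/N) - Y(k/N)
    k0 = int(round(t0 * N))
    half = n_window // 2
    window = incr[max(k0 - half, 0):min(k0 + half, N)]
    return -np.mean(np.log(np.abs(window))) / np.log(N)

# Toy check: Brownian motion, H = 1/2 everywhere.
rng = np.random.default_rng(1)
N = 2 ** 14
path = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, N ** -0.5, N))])
h_hat = localisability_estimate(path, 0.5, n_window=2048)
```

Consistently with the rate 1/log N discussed above, the estimate carries a bias of order 1/log N (here roughly +0.06), so `h_hat` lands near 0.56 rather than exactly 0.5.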

Main results
The three following theorems apply to a diagonal process Y defined from the field X given by (1.2). For convenience, the conditions required on X and on the function f appearing in (1.2), denoted (C1), . . . , (C14), are gathered in Section 6. Theorem 3.1 gives the convergence in the L^r spaces of the estimator of the localisability function H, while Theorems 3.2 and 3.3 give the convergence of the estimator of the stability function α.

Approximation of the localisability function

Theorem 3.1 Under the conditions of Section 6, for all t_0 ∈ U and all r > 0,

lim_{N→+∞} E |Ĥ_N(t_0) − H(t_0)|^r = 0.

Proof. See Section 5.

Approximation of the stability function
We first give conditions for the convergence in probability of S_N(p) in Theorem 3.2, which is useful to establish the consistency of the estimator α̂_N(t_0). We assume in particular:
• The process X(., t_0) is H(t_0)-self-similar with stationary increments, and H(t_0) < 1.

Proof
See Section 5.
Theorem 3.3 Let Y be a multistable process. Assume the conditions of Theorem 3.2. Then, for all t_0 ∈ U and all r > 0,

lim_{N→+∞} E |α̂_N(t_0) − α(t_0)|^r = 0.

Proof. See Section 5.

Examples and simulations
In this section, we consider the "multistable versions" of some classical processes: the α-stable Lévy motion and the Linear Fractional Stable Motion. We then provide an example of application with ECG data. We first recall some definitions. In the sequel, M will denote a symmetric α-stable (0 < α < 2) random measure on R with control measure the Lebesgue measure L. We will write

L_α(t) = M([0, t])

for the α-stable Lévy motion, and we will use its Ferguson-Klass-LePage representation. The following process is called linear fractional α-stable motion:

L_{α,H,b⁺,b⁻}(t) = ∫_R ( b⁺ ((t − x)₊^{H−1/α} − (−x)₊^{H−1/α}) + b⁻ ((t − x)₋^{H−1/α} − (−x)₋^{H−1/α}) ) M(dx).

When b⁺ = b⁻ = 1, this process is called the well-balanced linear fractional α-stable motion and denoted L_{α,H}.
The localisability of Lévy motion and of linear fractional α-stable motion simply stems from the fact that they are self-similar (with exponents 1/α and H respectively) and have stationary increments [3].
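The argument of [3] can be spelled out in one line: for an H-self-similar process Y with stationary increments,

```latex
\frac{Y(u + rt) - Y(u)}{r^{H}}
  \overset{\mathrm{fdd}}{=} \frac{Y(rt) - Y(0)}{r^{H}} % stationary increments
  \overset{\mathrm{fdd}}{=} \frac{r^{H}\, Y(t)}{r^{H}} = Y(t), % H-self-similarity
```

so Y is localisable at every u with h(u) = H and tangent process Y′_u = Y. For L_α this gives h(u) = 1/α, and for L_{α,H} it gives h(u) = H.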
We now apply our results to the multistable versions of these processes, that were defined in [4,5].

Symmetric multistable Lévy motion
We consider the field X obtained by taking f(t, u, x) = 1_{[0,t]}(x) in (1.2), and the symmetric multistable Lévy motion given by its diagonal Y(t) = X(t, t).

Proof
We know from [9] that all the conditions (C1)-(C14) are satisfied. We deduce from the fact that X(., t_0) is an α(t_0)-stable Lévy motion that it is 1/α(t_0)-self-similar with stationary increments [15]. We then prove that the condition (C*) is satisfied.
We conclude with Theorem 3.3.

We display in Figure 3 some examples of estimations for various functions α, the function H satisfying the relation H(t) = 1/α(t). The trajectories have been simulated using the field (4.4). For each u ∈ (0, 1), X(., u) is an α(u)-stable Lévy motion. It is then an α(u)-stable process with independent increments. We have generated these increments using the RSTAB program available in [16] or in [15], and then taken the diagonal X(t, t).
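The simulation scheme just described (independent α(u)-stable increments, then the diagonal X(t, t)) can be sketched as follows. This is our own minimal version: the Chambers-Mallows-Stuck algorithm stands in for the RSTAB program of [16], and all function names are ours.

```python
import numpy as np

def sas_sample(alpha, size, rng):
    """Symmetric alpha-stable variates via the Chambers-Mallows-Stuck
    algorithm (stand-in for RSTAB), valid for 0 < alpha < 2."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))

def multistable_levy_path(alpha_fn, N, rng=None):
    """Diagonal construction sketched above: the increment over
    [k/N, (k+1)/N] is drawn alpha(k/N)-stable with scale N**(-1/alpha),
    and the path is the cumulative sum, i.e. the diagonal X(t, t)."""
    rng = np.random.default_rng(rng)
    t = np.arange(N) / N
    alphas = alpha_fn(t)
    incr = np.array([sas_sample(a, 1, rng)[0] * N ** (-1.0 / a)
                     for a in alphas])
    return np.concatenate([[0.0], np.cumsum(incr)])
```

The per-increment loop is deliberate: the stability index, and hence the scaling N^{−1/α}, changes from one increment to the next.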
Each function is rather well estimated: the estimators recover the shape of the functions. However, we notice a significant bias in Figure 3 in the estimation of H, which seems to decrease when H takes values close to 1. We observe this phenomenon on most trajectories, while the estimator α̂ seems to be unbiased. We have displayed the product α̂Ĥ in order to show the link between the estimators: we indeed recover the asymptotic relationship H(t) = 1/α(t). We observe in Figure 4 an evolution of the variance in the estimation of α. It seems to increase when the function α decreases, and we conjecture that the variance at the point t_0 depends on the value α(t_0) in this way. In fact, the increments Y_{k,N} are asymptotically distributed as an α(t_0)-stable variable, so we expect the variance of S_N and R_exp to increase when α decreases.
We have increased the resolution in Figure 5, taking more points for the discretization. The gap observed in Figure 4.b for α near 1 is then corrected.
We show in Figure 6 some paths of the Linear Multifractional Multistable Motion, with the two corresponding estimations of α and H. To simulate the trajectories, we have used the field (4.5). For each u, X(., u) is an (H(u), α(u))-linear fractional stable motion, generated using the LFSN program of [16]. We have then taken the diagonal process X(t, t).
These estimates are overall less accurate than those obtained for the Lévy process, because of the stronger correlations between the increments of the process. However, the estimation of H does not seem to be disturbed by those correlations: the shape of the function H is preserved. For α, we notice some disruptions when the function is close to 1. We finally show, in the last line of Figure 6, an example where the estimation of α is not good enough. The trajectory, Figure 6.a), exhibits a big jump, which pulls the estimator α̂, represented in Figure 6.b), downwards as long as the jump is taken into account among the n(N) points. The estimation of H, represented in Figure 6.c), does not seem to be affected by this phenomenon.

Application to ECG data
We consider an example of a trajectory with a varying index of stability and a varying index of localisability. The dataset comes from [12]. We denote by Z the process corresponding to an ECG record. Its length is N = 1000000 points. We then consider the process Y defined from Z. The realization of the process Y associated to the ECG series is represented in Figure 7. The increments of this process cannot be regarded as stationary. We see in this example that the smoothness, as well as the intensity of significant jumps, actually varies with time.

Figure 7: Trajectory of the process Y associated to the ECG series with N = 1000000 points.
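Estimating H as a function of t, as done for the ECG series, amounts to sliding the window of the estimator along the path. The sketch below is ours and runs on toy Brownian data (where H(t) = 1/2 for all t), not on the ECG dataset; the grid and window size are assumptions of the illustration (the ECG experiment uses n(N) = 25000).

```python
import numpy as np

def h_profile(path, n_window, n_points=50):
    """One estimate of H(t) per grid point t: minus the windowed mean
    of log |Y_{k,N}| divided by log N, window centred at t."""
    N = len(path) - 1
    log_incr = np.log(np.abs(np.diff(path)))
    ts = np.linspace(0.1, 0.9, n_points)   # stay away from the edges
    half = n_window // 2
    ests = []
    for t in ts:
        k0 = int(t * N)
        ests.append(-np.mean(log_incr[k0 - half:k0 + half]) / np.log(N))
    return ts, np.array(ests)

# Toy check on Brownian motion, where H(t) = 1/2 for all t.
rng = np.random.default_rng(2)
N = 2 ** 14
bm = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, N ** -0.5, N))])
ts, ests = h_profile(bm, n_window=2048)
```

On a multistable path, the profile `ests` would instead track the varying function H(t), up to the 1/log N bias discussed in Section 2.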
We have performed an estimation of the localisability function H for this process Y. Figure 8 represents an estimation of H as a function of t. The estimate of H is computed by taking n(N) = 25000 points. We notice a correlation between the noisy areas of the trajectory and the times when the exponent H is small, and also a larger exponent when the trajectory seems to be smoother. For the estimation of the function α, we have also taken n(N) = 25000. The result is presented in Figure 9. We observe here as well a link between the noise and the function α. When the intensity of the significant jumps of the trajectory is high, the stability function is close to 2. A lower stability index corresponds to a period with a lower intensity of significant jumps.

Proofs

Note that it is sufficient to prove the result of Theorem 3.1 for r ≥ 1, since convergence in L^p implies convergence in L^q for all q < p. Let r ≥ 1, and let H satisfy the condition (C5). We write Ĥ_N(t_0) − H(t_0) as the sum of two terms. Since H is continuously differentiable, there exists a constant K_r ∈ R, depending on r, that controls the second term. To conclude, it is then sufficient to show that there exists a constant K ∈ R, depending on t_0 and r, bounding the corresponding expectation for all N ∈ N. We first consider I^1_N(t).
With the conditions (C1), (C2) and (C3) (or (C1), (Cs2), (Cs3) and (Cs4) in the σ-finite case), we can apply Proposition 4.9 or 4.10 of [9]: there exists K_U > 0 such that the corresponding tail bound holds for all (u, v) ∈ U² and x > 0, and the same arguments give the analogous bound. Let η < c. The Markov inequality, together with Property 1.2.17 of [15], controls E|X(·)|^η. With the condition (C9), there exists K > 0 such that the resulting bound holds for all N ≥ N_0 and all t ∈ V. Finally, there exists K > 0 such that for all N ≥ N_0 and all t ∈ V, I^1_N(t) ≤ K. Using equation (5.7) and the condition (C9), we obtain a further constant K > 0 valid for all N ≥ N_0 and all t ∈ V. Combining the elementary inequalities for p ≤ 1 and for p ≥ 1, it is enough, to prove Theorem 3.2, to show that A_N(p) converges. Let U be an open interval satisfying the conditions of the theorem and let t_0 ∈ U. We can fix N_0 ∈ N and an open interval V ⊂ U, depending on t_0, such that the required estimates and the inequality (5.6) hold for all N ≥ N_0 and all t ∈ V.
Let u > 0. We know from (5.6) that there exists K_U > 0 such that the bound holds for all t ∈ V, and hence there exists K_{U,p} > 0 such that (5.8) holds. Since α is a continuous function, we can fix U small enough that c = inf_{t∈U} α(t) > p.
Under the condition (C*), we can apply Theorem 2.1 of [13]: there exists a positive constant C such that the stated bound holds. Since the process X(., t_0) is H(t_0)-self-similar with stationary increments, the constant C does not depend on k and j. We then obtain the existence of a positive constant C_{p,c_0}, depending on p, c_0 and x, such that the corresponding bound holds. Since lim_{N→+∞} n(N) = +∞ and lim_{j→+∞} ∫_E |h_{0,t_0}(x) h_{j,t_0}(x)|^{α(t_0)/2} m̂(dx) = 0, we conclude with Cesàro's theorem that there exists N_0 ∈ N such that the required inequality holds for all N ≥ N_0. The function g is continuous on (0, 2], with g(0) > 0 and g(2) > 0, and the only solution of the equation g(α) = 0 is α(t_0). Then, there exists a positive constant K_{α(t_0)}, depending only on α(t_0), such that the stated lower bound holds. We now estimate |g(α̂_N(t_0))|.