Visual information and expert’s idea in Hurst index estimation of the fractional Brownian motion using a diffusion type approximation

An approximation of the fractional Brownian motion based on the Ornstein-Uhlenbeck process is used to obtain an asymptotic likelihood function. Two estimators of the Hurst index are then presented in the likelihood approach. The first estimator is produced from the observed values of the sample path, while the second one employs the likelihood function of the incremental process. We also employ the visual roughness of a realization to restrict the parameter space and to obtain prior information in the Bayesian approach. The methods are then compared with three contemporary estimators, and an experimental data set is studied.


General motivation
Fractional Brownian motion (fBm) appears in the modeling of wide classes of non-stationary stochastic processes. The statistical self-similarity and the ability to fine-tune the order of Hölder continuity are the famous advantages of this process that make it of the greatest interest in modeling natural phenomena. Typically, the creation of the fBm is attributed to ref. 1, since it investigated the basic properties of fBm and stressed its role in the modeling of natural phenomena. However, it had already been introduced during the generation of Gaussian spirals in Hilbert space 2. An fBm is a zero-mean continuous Gaussian process $B_H(t)$ with stationary increments and the homogeneous incremental variance

$$\mathbb{E}\big[(B_H(t) - B_H(s))^2\big] = \sigma^2 |t - s|^{2H}$$

for any t, s > 0. Consequently, the covariance function is

$$\operatorname{cov}\big(B_H(t), B_H(s)\big) = \frac{\sigma^2}{2}\big(|t|^{2H} + |s|^{2H} - |t - s|^{2H}\big), \qquad (1)$$

where 0 < H < 1 and σ > 0 are the parameters of the process. Since σ is a scale parameter, we assume throughout that σ = 1. Therefore, the parameter of interest is H, called the fractal or Hurst index (so named by Benoit B. Mandelbrot), in honor of Harold E. Hurst (for spending 62 years in Egypt carrying out a project on the hydrology of the Nile river) and also in honor of Ludwig O. Hölder. The features of H are more far-reaching than those of a mere covariance parameter. For instance, we know that the differentiability of a process indicates the smoothness of the corresponding realizations. According to the last term of (1), fBm is not differentiable in $L^2$, while it satisfies Hölder continuity of order H and admits fractional derivatives of order α for any 0 < α ≤ H. Thus, an fBm with greater H induces smoother realizations, and this is our general theoretical motivation for considering H as the crucial parameter of the process. Stationary-increment Gaussian processes with a Hölder continuity of order H constitute an important class of H-index processes 3. The most important feature of this family is the closed form of the Hausdorff (fractal) dimension of their graphs.
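As a small numerical aside (not part of the original derivation), the covariance in (1) and the $|t-s|^H$ scaling of increment standard deviations can be coded directly; `fbm_cov` and `increment_sd` are hypothetical helper names introduced here for illustration:

```python
import numpy as np

def fbm_cov(t, s, H, sigma=1.0):
    """Covariance of fBm per (1): sigma^2/2 (|t|^{2H} + |s|^{2H} - |t-s|^{2H})."""
    return 0.5 * sigma**2 * (abs(t)**(2*H) + abs(s)**(2*H) - abs(t - s)**(2*H))

def increment_sd(dt, H):
    """Standard deviation of an fBm increment over a lag dt: dt^H (sigma = 1)."""
    return dt**H

# Larger H => smaller increment variance at short lags => smoother paths.
print(increment_sd(0.01, 0.3))  # ~ 0.251
print(increment_sd(0.01, 0.8))  # ~ 0.025
```

For H = 1/2 the formula reduces to the Brownian covariance min(s, t), which is a quick sanity check on the implementation.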

Practical motivation
We discussed above the theoretical motivation for estimating H, and now we briefly explain the enthusiasm of physicists and environmental scientists for H. Measuring smoothness is an inseparable part of almost all surface-analysis investigations. Due to the power and scaling laws, a variety of high-resolution observations have been studied as realizations of fractal random fields. The Lévy fractional Brownian sheet (LfBs) of fractal index H is usually appointed to represent fractal behavior. The zero-mean Gaussian random field $L_H(\mathbf{t})$, $\mathbf{t} \in \mathbb{R}^d$, is an LfBs of index H if

$$\mathbb{E}\big[(L_H(\mathbf{t}) - L_H(\mathbf{s}))^2\big] = \|\mathbf{t} - \mathbf{s}\|^{2H}.$$

There are many methods to estimate the fractal index H based on the d-dimensional data of a realization, but using almost all of them requires both computational and technical skills. A discussion of this subject can be found in ref. 5 and the references therein. Using line-transect data, we hope to decrease the computational cost of estimating H by reducing the dimension of the data, which leads to estimating H based on one-dimensional processes. For clarity, suppose that a picture has been taken of a surface (d = 2) and the observation is gathered in an r × c matrix. We are looking for an estimation of H that uses the data as c samples of length r. The question is: 'Is it possible to consider the data matrix as c sample paths from an fBm of index H?' To answer this question, note that the Hausdorff dimension of the graph of $L_H(\mathbf{t})$ is equal to d + 1 − H. Extracting the simple one-dimensional stochastic process $Z(t)$ along a fixed transect gives

$$\mathbb{E}\big[(Z(t) - Z(s))^2\big] = c' |t - s|^{2H},$$

where c′ > 0 does not depend on t, and hence Z is a one-dimensional index-H process 3. Thereupon, it is helpful to look at the line-transect data of realizations of an LfBs when the aim of the study involves only the estimation of the fractal index and/or measuring surface roughness.
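The r × c data-splitting step described above is trivial but worth pinning down; here is a minimal sketch (function name `line_transects` is ours, not the paper's):

```python
import numpy as np

def line_transects(Z):
    """Split an r x c surface observation into c column transects,
    each treated as a one-dimensional sample path of length r."""
    Z = np.asarray(Z)
    r, c = Z.shape
    return [Z[:, j] for j in range(c)]

# A 4 x 3 toy "surface" yields 3 transects of length 4 each.
Z = np.arange(12).reshape(4, 3)
paths = line_transects(Z)
```

Each element of `paths` can then be handed to any one-dimensional estimator of H.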
The estimators of H have often been developed for estimating the roughness of surfaces using line-transect data. Several methods have been proposed to estimate H, the oldest among them being the R/S statistic introduced by ref. 6. Further results for this estimator were provided by ref. 7. In ref. 8, a robust estimator of H was introduced for stable distributions; it has been shown that this estimator is not efficient in comparison with the ML estimators in the Gaussian case. Perhaps the first serious attempt to compute the ML estimate in image textures is provided by ref. 9. One may find a Bayesian solution for the same problem in ref. 10. Using the asymptotic behavior of the k-th absolute moment of discrete variations of a sampled path, a class of consistent estimators of the Hurst index has been constructed 11. Based on the Bahadur representation for sample quantiles of nonlinear transformations of Gaussian processes, another consistent estimator of the Hurst index for the general class of locally self-similar Gaussian processes has been presented 12. Generally, there is a class of estimators based on a method called discrete variations, in which the oscillation of a quantity is employed in the estimation procedure. In the statistical literature, refs 13 and 14 simultaneously originated this method, and it has been highlighted many times in other works 11,12,15. There also exists a consistent estimator of H based on the Karhunen-Loève expansion of a Gaussian process 16. We compare our estimators with the estimators of refs 11, 12 and 15.
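To fix ideas, the discrete-variations principle mentioned above can be illustrated with its simplest lag-ratio variant (this is a textbook sketch, not the exact estimator of refs 11, 12 or 15): for fBm, $\mathbb{E}[(X(t+k\delta)-X(t))^2] = (k\delta)^{2H}$, so the ratio of mean squared increments at lags 2 and 1 is $2^{2H}$.

```python
import numpy as np

def hurst_discrete_variation(x):
    """Estimate H from the ratio of mean squared increments at lags 2 and 1.
    For fBm the population ratio is 2^{2H}, so H = log(ratio) / (2 log 2)."""
    x = np.asarray(x, dtype=float)
    v1 = np.mean(np.diff(x)**2)          # lag-1 squared increments
    v2 = np.mean((x[2:] - x[:-2])**2)    # lag-2 squared increments
    return np.log(v2 / v1) / (2.0 * np.log(2.0))

# Sanity check on a perfectly smooth path (a straight line): H -> 1.
print(hurst_discrete_variation(np.linspace(0, 1, 100)))
```

On a straight line the lag-2 increments are exactly twice the lag-1 increments, so the estimator returns 1, the smooth limit.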

Decomposition of fBm.
The representation mentioned in (3) induces the idea of achieving a diffusion process based on stochastic integration. Since the solution of an Itô integral is a semi-martingale, and since B_H, as the solution of the stochastic integral in (3), is not a semi-martingale for H ≠ 1/2, the idea of constructing an exact diffusion representation must be abandoned. On the other hand, there is a series of diffusion processes, $\sum_{n=1}^{N} Y_n(t)$, that converges 4 to B_H(t). Each process {Y_n(t)} is the solution of an Itô integral and strongly depends on the value of H. We would like to compute the likelihood function using the joint distribution of the diffusion processes {Y_n(t)}_t for finite N. Precisely, the series decomposition for B_H(t) is obtained from the convergences (4) and (5), in which, for given t ∈ (0, 1) and n ∈ {1, … , N}, the component processes Y_{2n}(t) and Y_{3n}(t) are defined. Based on the convergences in (4) and (5), we construct the likelihood function by applying the Euler discretization to the diffusion processes. In the sequel, denote the global parameter space (0, 1) by Θ, and denote the restricted parameter spaces (0, 1/2] and (1/2, 1) by Θ_1 and Θ_2, respectively. One may suggest simpler approximations 18 in which the fBm is approximated by a semi-martingale process, but such an approximation does not have a diffusion-type representation. The presented approach strongly depends on this representation, and therefore substituting the approximation of ref. 4 with other approximations may invalidate the likelihood-computation procedure.

Estimation of H: simple use of convergences. Assume that a sample path of B_H(t) is observed at discrete times t_i ∈ [0, 1) for i = 1, … , m, where m is not a prime number. The restriction of the time index to the interval [0, 1) is somewhat a loss of generality; but, using the self-similarity of the fBm, it is possible to rescale the sample path into this interval.
In fact, this assumption is important in computing the variance of Y_{jn}, for j = 1, 2 and 3. Moreover, only for this subsection, let m = kp, where k, p ∈ ℕ. A very elementary approach is to consider B_H(t_i) as a summation of independent Gaussian random variables.

Proposition 1. Let λ_{in}(t, s) denote cov(Y_{in}(t), Y_{in}(s)) for i = 1, 2, 3, and n = 1, … , N. For the fBm, B_H, the covariance between two arbitrary points 0 < s < t satisfies equations (6) and (7). The proof of this proposition is grouped with all the other proofs in the supplementary file.

For a large enough N, the left-hand sides of (6) and (7) return an approximation of the covariance function of the fBm in terms of the mentioned series decomposition. We now consider the sample path as a dependent sample of size k from a p-variate zero-mean normal random vector whose covariance matrix entries, for i = 0, … , p − 1, do not depend on j. The covariances are replaced from the left-hand side of (6) or (7) according as H ∈ Θ_1 or H ∈ Θ_2, respectively. This produces p different values for the ML estimate of H, denoted $\hat{H}_i$, and one may average all the resulting solutions of (8) over the various values of i to obtain a new consistent estimator. To this end, we propose the weighted estimator (9), where w is an appropriate weight function. The idea of using a weighted average comes from the numerical properties of the $\hat{H}_i$: numerically, those $\hat{H}_i$'s that correspond to larger values of i return better estimates of H. It is worth mentioning that, although larger values of H produce smoother realizations of the fBm, the error of approximation becomes larger for H > 1/2. Hence, the conjecture of better results for smoother realizations may not hold, which is due to the weakness of the convergence when H > 1/2. The increments of the approximating process do not constitute a stationary process; however, for large enough N, they converge in $L^2$ to the stationary incremental process of the fBm.
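The weighted-averaging step in (9) is easy to prototype. The linear weights below are only a stand-in illustrating the stated preference for larger i; the paper's actual weight function w is defined in (9):

```python
import numpy as np

def weighted_hurst(h_estimates, weights=None):
    """Combine the p per-offset estimates H_i into one estimate.
    Default weights grow linearly with i, reflecting the numerical
    observation that larger offsets i give better estimates (an
    illustrative choice, not the paper's w)."""
    h = np.asarray(h_estimates, dtype=float)
    if weights is None:
        weights = np.arange(1, len(h) + 1, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize to a convex combination
    return float(np.dot(w, h))

print(weighted_hurst([0.30, 0.34, 0.38, 0.40]))  # 0.372
```

Any nonnegative, normalized weight sequence keeps the combined estimate inside the convex hull of the $\hat{H}_i$, so it cannot leave the parameter space if the individual estimates do not.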
Using the second approach, we see that H appears in the likelihood in a more tractable form, via the smoothing effect of the difference operator. This may accelerate the numerical maximization of the likelihood.

Estimation of H.
Let B_H denote the vector of the sample path, (B_H(t_1), … , B_H(t_m))^T. Also, let $\mathcal{F}_t$ be the standard filtration generated by the Brownian motion, that is, the smallest σ-field induced by {B^{(n)}(s), s ≤ t}. We attempt to work out the distribution of the increments, noting that {B_H(t)} has a quadratic variation process; thus, we get the stated convergence. Convergence in $L^2$ implies convergence in distribution, and we then look at the increments as the limit of a bi-indexed stochastic process. The following theorem helps us to approximate the distribution of the increments of a sample path of fBm with Hurst index H ≤ 1/2.
The approximating process comes with a non-stationary covariance function. The same approach is employed to calculate the ML estimate of the fractal index when the parameter space is Θ_2, and we denote the induced likelihood function by L_2(H|B_H).
For H ∈ Θ_2, the increments are approximated, as N → ∞, by a process with the same kind of non-stationary covariance function, where (Z_{1n}(·), … , Z_{4n}(·)), n = 1, … , N, are independent zero-mean Gaussian random vectors and the covariance elements are given through equations (22). A criticism is the complicated form of the resulting likelihood as a function of H. One may suggest computing the exact likelihood using (1); note, however, that the direct use of (1) yields a non-differentiable function of H, while for H > 1/2, β_n does not appear in the approximation of the likelihood function. Thus, the target function for maximization belongs to the class of $C^1((0,1))$ functions and is smooth enough for using numerical methods in the computational procedures.
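As a quick numeric aside on the quadratic-variation argument used above: for an fBm observed on a grid of mesh 1/m, the expected sum of squared increments is $m \cdot (1/m)^{2H} = m^{1-2H}$, which behaves very differently on Θ_1 and Θ_2. A tiny sketch (our own illustration):

```python
def expected_quadratic_variation(m, H):
    """E[sum of squared fBm increments] on a grid of mesh 1/m:
    m * (1/m)^{2H} = m^{1 - 2H} (sigma = 1)."""
    return m ** (1.0 - 2.0 * H)

# H = 1/2 (Brownian motion): quadratic variation stays at 1 for every m.
# H > 1/2: it vanishes as the grid refines; H < 1/2: it diverges.
print(expected_quadratic_variation(1024, 0.5))  # 1.0
print(expected_quadratic_variation(1024, 0.8))  # 0.015625
```

This is the elementary reason the two regimes H ≤ 1/2 and H > 1/2 call for separate likelihood approximations.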
Bayesian discussion. The approximation of the likelihood function depends on whether H ≤ 1/2 or not.
According to (1), the fBm is short-range dependent when H ∈ Θ_1, and it becomes a long-range dependent process as H crosses the threshold 1/2. Thus, the realization of an fBm for H ≤ 1/2 has a rough graph, while it has low oscillation with a polynomial trend when H ∈ Θ_2. This visual difference between short-range and long-range dependent fBm's may help us in constraining the parameter space to accelerate the calculation of the ML estimate, or to employ prior information for Bayesian inference. Based on the type of dependence, we constrain the parameter space Θ to one of the sub-intervals Θ_1 or Θ_2. Consequently, the class of prior distributions is categorized into the set of distributions with support on Θ_1 or on Θ_2.
The procedure for using prior information is quite simple. We need a likelihood function, as the joint distribution of the observations given the unknown parameter H, and a prior distribution. The likelihood function can be expressed in terms of the sample path B_H or of its increments, D. Simply speaking, one can obtain the likelihood function in terms of B_H by employing Proposition 1, or calculate the likelihood in terms of the increments by Theorems 1 and 2. Both methods express the likelihood function for H ≤ 1/2 and H > 1/2 separately.
Let L_1(·|B_H) and L_2(·|B_H) denote the likelihood functions in terms of B_H for H ∈ Θ_1 and H ∈ Θ_2, respectively (just like the notation used in (11)), and define L_1(·|D) and L_2(·|D) in the same way. Let f_D(·|H) denote the sampling distribution of the increments, which is rewritten in (13) as the mixture of the two restricted likelihoods, where I_A(·) is the indicator function. Although there is no conjugate prior distribution, reducing the parameter space may change the mixture likelihood into a simple multivariate normal. Experts' opinions and conjectures on the sample path produce the prior information and may reduce the parameter space. For instance, consider a realization of an fBm with fractal index H = 1/4, for which the short-range dependence property is very obvious (Fig. 1). In this case, the expert opinion hopefully suggests considering H in the interval (0, 1/2], and so the second term in (13) is omitted. According to the bounded parameter space, we deduce the use of 1 − ½Beta(a, b) and ½Beta(a, b) as prior distributions for long- and short-range dependent sample paths, respectively. When the determination of the range is not possible by visual evidence, using a prior distribution on (0, 1) inevitably leads to working with the mixture form of (13). In this situation, we use the Beta(a, b) distribution as the prior. Moreover, the suggested priors can be replaced by the uniform distribution as a non-informative prior. The rest is using good candidates for the proposal distributions and performing the MCMC algorithm to report the posterior summaries. The implementation and numerical results of the Bayesian approach are discussed in the following section.
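The scaled-Beta priors above are straightforward to sample; a minimal sketch (the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_sample(n, a, b, short_range=True, rng=rng):
    """Draw from the scaled Beta priors of the text: H = Beta(a,b)/2 on
    (0, 1/2) for short-range paths, and H = 1 - Beta(a,b)/2 on (1/2, 1)
    for long-range paths."""
    u = rng.beta(a, b, size=n)
    return u / 2.0 if short_range else 1.0 - u / 2.0

hs = prior_sample(1000, 3, 3, short_range=True)    # supported below 1/2
hl = prior_sample(1000, 3, 3, short_range=False)   # supported above 1/2
```

With a = b = 1 both constructions reduce to uniform priors on the respective half-intervals, matching the non-informative option mentioned in the text.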

Results
We separate this section into three main parts. First, the numerical behavior of the likelihood-based estimators is studied; then we explain the procedure of the Bayesian approach. The third part contains the computation of the estimators for a real data set.
Likelihood computation. The consistency of the estimators strongly depends on the mean squared error (MSE) of the approximation of the fBm. The structure of the two-dimensional Brownian motions in the case H > 1/2 can be taken into account to express the terms of the series representation effectively. Although the sample path is smoother in the case H > 1/2, the MSE of the approximation of the fBm is greater in comparison with the case H ≤ 1/2. There is another approximation for the fBm that is more effective 4 than (5) when H ∈ Θ_2; however, it is difficult to obtain a diffusion-type process using that approximation. Therefore, we continue the simulation study with the likelihoods derived in the previous sections.
Using the circulant embedding method 19, we first generate a sample path b = (b_1, … , b_m)^T of size m = 256 from an fBm with known index H. We only implement the simulation study for regular observations, and the likelihood function given the observed sample path is formed as described above. It can be seen that the estimators based on $\hat\gamma(i/m)$ become more precise for larger values of i. On the one hand, replication of this simulation study shows less bias but greater variance for greater values of i; on the other hand, we found less variance but greater bias for the estimators based on $\hat\gamma(i/m)$ when i is small. This is the basis of using the weighted estimator (9). Although one may use cross-validation to construct a better weight function, for this simulation study we partition the vector $\hat{H}$ into two sub-vectors with p/2 elements each and consider a corresponding weight function. The second column of Table 1 shows the MSE of $\hat{H}_c$ when the unknown parameter H is attributed to one of the intervals (0, 1/2] or (1/2, 1). Each reported MSE in this paper is computed by replicating the estimation procedure 100 times.
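For completeness, here is a standard sketch of circulant-embedding generation of an fBm path (Davies-Harte flavor, corresponding to the method cited as ref. 19; this is our own illustration, not the authors' code):

```python
import numpy as np

def fbm_sample(m, H, rng):
    """Generate one fBm path on {1/m, ..., 1} by circulant embedding of
    the fractional Gaussian noise (fGn) covariance."""
    k = np.arange(m + 1)
    # fGn autocovariance at integer lags (unit-lag spacing)
    g = 0.5 * ((k + 1.0)**(2*H) - 2.0 * k**(2.0*H) + np.abs(k - 1.0)**(2*H))
    c = np.concatenate([g, g[-2:0:-1]])           # circulant first row, length 2m
    lam = np.clip(np.fft.fft(c).real, 0.0, None)  # eigenvalues; clip tiny round-off
    w = rng.standard_normal(2 * m)
    # ifft(sqrt(lam) * fft(w)) is real with exactly the circulant covariance
    y = np.fft.ifft(np.sqrt(lam) * np.fft.fft(w)).real
    fgn = y[:m] * (1.0 / m)**H                    # rescale unit-lag fGn to mesh 1/m
    return np.cumsum(fgn)

b = fbm_sample(256, 0.5, np.random.default_rng(1))
```

For fGn the circulant eigenvalues are known to be nonnegative for every H in (0, 1), which is what makes the embedding exact rather than approximate.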
The likelihood of the m = 256 observations is computed according to (10) when the generated b is a sample path of an fBm of index 0 < H ≤ 1/2. Fortunately, the smoothing role of the difference operator makes it possible to maximize the likelihood by employing simple particle swarm optimization (PSO) via the R package psoptim. The implementation of this procedure for 1/2 < H < 1 is not as simple. For convenience, let us denote the right-hand sides of (A.5)-(A.8) in the supplementary file by A_5(t, s), … , A_8(t, s), and the right-hand sides of (A.9) and (A.10) by A_9(t) and A_10(t), respectively. According to Theorem 2, the sub-diagonal elements σ_ij, j < i, for 1/2 < H < 1 are obtained from A_5-A_8 for i = 2, … , 256 and j = 1, … , i − 1, and the upper-diagonal components are computed using the symmetry of Σ_inc. Furthermore, the diagonal elements σ_ii are computed from A_9 and A_10 for i = 1, … , 256. The covariance matrix Σ_inc is now acquired, and the R function nlminb can find the maximum of the likelihood function numerically. The results of this procedure are shown in the third column of Table 1.
In the procedure mentioned above, we knew whether H belongs to Θ_1 or Θ_2. When there is no such information about the parameter, the likelihood function compensates for this uncertainty. Let us explain this part for $\hat{H}_c$. The likelihood function for this problem combines L_1 on Θ_1 and L_2 on Θ_2, so its maximization is equivalent to maximizing L_1 and L_2 separately and then picking the one with the greater value. In this way, we allocate the unknown parameter to one of the intervals along the estimation procedure. We can simply modify this method for the incremental approach by replacing Σ_l with Σ_l^(inc), the covariance matrix of the increments when H ∈ Θ_l, for l = 1, 2. The results are gathered in the second and third columns of Table 2; in comparison with Table 1, the MSEs of the estimators are greater, which shows the effect of the more precise parameter spaces used in the previous simulation study.
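The select-the-larger-likelihood device can be sketched generically. Here `cov_fn` is a hypothetical stand-in for the paper's approximate covariance builders; to make the sketch runnable we plug in the exact fBm covariance (1), which the paper's approximations target:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gauss_loglik(b, H, cov_fn):
    """Zero-mean Gaussian log-likelihood of the path b given H."""
    S = cov_fn(H)
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (logdet + b @ np.linalg.solve(S, b) + len(b) * np.log(2 * np.pi))

def regime_ml(b, cov_fn, eps=1e-2):
    """Maximize over Theta_1 = (0, 1/2] and Theta_2 = (1/2, 1) separately,
    then keep the regime with the larger maximized likelihood."""
    neg = lambda H: -gauss_loglik(b, H, cov_fn)
    r1 = minimize_scalar(neg, bounds=(eps, 0.5), method="bounded")
    r2 = minimize_scalar(neg, bounds=(0.5 + eps, 1.0 - eps), method="bounded")
    return (r1.x, 1) if r1.fun < r2.fun else (r2.x, 2)

# Illustration: exact fBm covariance on a regular grid, Brownian data (H = 1/2).
m = 128
t = np.arange(1, m + 1) / m
def exact_cov(H):
    T, S = np.meshgrid(t, t)
    return 0.5 * (T**(2*H) + S**(2*H) - np.abs(T - S)**(2*H))

rng = np.random.default_rng(0)
b = np.cumsum(rng.standard_normal(m)) / np.sqrt(m)
h_hat, regime = regime_ml(b, exact_cov)
```

Because the true H = 1/2 sits on the boundary between Θ_1 and Θ_2, either regime may win here; the point of the sketch is the two bounded maximizations followed by the comparison of their maximized values.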
Bayesian computation. The Bayesian computation begins with the multiplication of the likelihood function by the prior distribution. Using the method mentioned above, the likelihood computation is straightforward; what remains is the selection of the prior distribution and then the posterior generation using an MCMC algorithm. Again, let b be a sample path of the fBm, and let the parameter space be Θ_1 or Θ_2. The suggested prior distributions for H ∈ Θ_1 and H ∈ Θ_2 are, respectively, ½Beta(a, a) and 1 − ½Beta(a, a), both of which return the non-informative uniform prior when a = 1. Therefore, the posterior distribution for the parameter space Θ_l, l = 1, 2, satisfies (15), where Φ is the cumulative distribution function of a standard normal random variable. To obtain an approximately stationary sample, we eliminate the first 2500 generations, and for each parameter setting we generate a sample of size 10000 from the corresponding posterior distribution. The results for a = 1 and 3 are gathered in the last two columns of Table 1. Bayesian computation for the general parameter space Θ = (0, 1) is very similar. In this case, we use Beta(a_1, a_2) as the prior distribution: when the observed sample path b shows an obvious short-range dependence behavior, we let a_1 < a_2, and for long-range dependence we employ a_2 < a_1. Using the notation of (15), the posterior is known up to a normalizing constant. The proposal density q(·|H_{j−1}) is a normal distribution with mean H_{j−1} and variance 1/16, truncated at zero and one. Thus, to obtain the j-th observation from the posterior, we generate a candidate from the density q(·|H_{j−1}) and accept it with probability min(1, η), where η is the Metropolis-Hastings ratio. The results are reported in Table 2; the results for H = 0.45 and H = 0.55 were obtained using the Beta distribution with parameters a_1 = a_2 = 3 as the prior.
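The truncated-normal random-walk step can be sketched as follows; `log_post` stands for any unnormalized log posterior on (0, 1) (in the paper it is (15)), and the Φ-normalization of the truncated proposal, which makes the ratio non-symmetric, enters through `truncnorm`:

```python
import numpy as np
from scipy.stats import truncnorm

def mh_hurst(log_post, n_iter=4000, burn=1000, h0=0.5, sd=0.25, seed=0):
    """Random-walk Metropolis-Hastings on (0, 1) with a N(H_{j-1}, 1/16)
    proposal truncated at 0 and 1, as described in the text."""
    rng = np.random.default_rng(seed)
    def logq(x, mu):  # log proposal density, including the Phi normalization
        a, b = (0.0 - mu) / sd, (1.0 - mu) / sd
        return truncnorm.logpdf(x, a, b, loc=mu, scale=sd)
    h, draws = h0, []
    for _ in range(n_iter):
        a, b = (0.0 - h) / sd, (1.0 - h) / sd
        prop = truncnorm.rvs(a, b, loc=h, scale=sd, random_state=rng)
        log_eta = log_post(prop) - log_post(h) + logq(h, prop) - logq(prop, h)
        if np.log(rng.uniform()) < min(0.0, log_eta):
            h = prop
        draws.append(h)
    return np.array(draws[burn:])

# Toy check against a Beta(5, 5)-shaped target, which has mean 1/2.
sample = mh_hurst(lambda h: 4.0 * np.log(h) + 4.0 * np.log(1.0 - h))
```

The burn-in/draw counts here are scaled down from the paper's 2500/10000 purely to keep the toy run fast.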
Both Tables 1 and 2 show that the Bayesian estimator with an informative prior returns better results, particularly for short-range dependent sample paths. In Table 1, in comparison with Table 2, the MSEs of the estimators are smaller, which shows the effect of the more precise parameter spaces. The error of the decomposition, and hence the risks of $\hat{H}_c$ and $\hat{H}_{inc}$, are decreasing in H. The fourth column of Table 2 represents the importance of visual information, because it returns smaller MSEs in comparison with the last column. Concerning Table 1, to preserve the range of the ML estimates, whenever the values of the estimators fell outside the parameter spaces, the results were reset to the nearest endpoint. Surprisingly, except for the cases H = 0.45 and 0.55, there was no need to reset the values. We also note that $\hat{H}_c$ and $\hat{H}_{inc}$ have under- and over-estimation tendencies, respectively.

Practical aspects. According to our real data, we are confronted with the same situation as in the practical problem introduced in the section entitled Practical motivation. Firstly, note that a topographic map of a region S is a collection of heights {Z(s, t)} at geographic coordinates (s, t) ∈ S. Typically, the coordinates are considered as nodes of a regular lattice. Thus, a route begins from a starting point and is made by connecting some neighboring nodes to reach the destination node. Precise topographic maps are often considered as fractal surfaces 5, and estimating the Hurst index is the most interesting problem for such data. We want to construct a road to move across the region S (from the North to the South or vice versa), and the smoothness of the road is the characteristic of interest. Figure 3 shows the aerial image of S, a mountainous area in the Northwest of Iran restricted to a rectangle with two opposite vertices located at the geographic xy coordinates (621995.073, 4195182.082) and (650345.073, 4217502.082). The image is gridded into a 249 × 316 lattice.
We would like to find the smoothest North-South route between the two regions indicated by the orange and blue rectangles. To control the cost of computation: (1) the image is rescaled into an 83 × 158 lattice; (2) we restrict our study in the new lattice to those routes made of at most 260 nodes. The χ² goodness-of-fit test for the fBm is applied to each resulting route to ensure the assumption of fractional behavior. We refer to ref. 20 for a list of goodness-of-fit tests including the χ² method.
There is a narrow valley at the points with geographic longitude equal to 6.305 × 10^5, and we expect to detect the smoothest path near these coordinates. Each route is a sequence of the form {Z(t_i, s_i)}_i such that {(t_i, s_i)}_i is a set of connected nodes. We first sort all the possible routes with respect to their total oscillation, which is simply the sum of the absolute differences |Z(t_i, s_i) − Z(t_j, s_j)| over any two successive nodes (t_i, s_i) and (t_j, s_j). Then, we shortlist the half of the routes with the smaller oscillations as candidates for the smoothest path. If the expert's idea allows us to attribute the Hurst index of these routes to Θ_1 or Θ_2, then we can use the beta-type prior to estimate H; otherwise, the simple uniform prior on (0, 1) is employed. In this case, it is suggested to employ a beta-type prior on Θ_1. A decomposition of N = 100 is used to approximate the fBm, and hence the likelihood function of each route. The smoothest route is the path with the largest estimated H, and the detected path is shown with a solid red line in Fig. 3. This route is constructed from 136 nodes, and the line transect of the route is provided in Fig. 4. It is worth mentioning that the estimated H of this route is 0.42065, based on the Bayesian computation. The estimators based on discrete variations, sample quantiles and the trimmed mean respectively return the values 0.68479, 0.47103 and 0.42557 for this route.
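The total-oscillation screening described above amounts to a sort-and-keep-half step; a minimal sketch (function names are ours):

```python
import numpy as np

def total_oscillation(z):
    """Sum of absolute height differences along a route's node sequence."""
    z = np.asarray(z, dtype=float)
    return float(np.sum(np.abs(np.diff(z))))

def shortlist(routes):
    """Keep the smoother half of the candidate routes (by total oscillation)."""
    ranked = sorted(routes, key=total_oscillation)
    return ranked[: max(1, len(ranked) // 2)]

print(total_oscillation([3.0, 1.0, 2.0]))  # |1-3| + |2-1| = 3.0
```

Each shortlisted route would then be passed to the Hurst-index estimation step.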

Model diagnosis. The main characteristic of the fBm is subdiffusion, that is,

$$\mathbb{E}\big[B_H^2(t)\big] = C_H\, t^{2H},$$

where 0 < H < 1 is the Hurst index, or subdiffusive exponent, and C_H > 0 is the diffusion constant. The class of subdiffusion processes contains three well-known stochastic processes: the fBm; the continuous-time random walk (CTRW) 21; and diffusion on a fractal lattice (DFL) 22,23. Although these processes belong to one class, their mechanisms are rather different 24. Applying the presented methods to any process other than the fBm will still return some H, which is misleading in the sense that the underlying process is not an fBm at all. In this section, we want to test whether the underlying process is an fBm or not. To this end, we first examine some general properties of the fBm and, after obtaining positive results, we use the p-variation method 25 to discriminate the fBm from the CTRW and the DFL. The outline of the algorithm is as follows 25:
• Verifying the stationarity of increments.
• Verifying the normality of the observed path.
• Testing the ergodicity of the increments.
• Testing the p-variation properties for some specific p's.
• Testing the filling ratio properties.
We have to apply this procedure to all the possible routes, but for the sake of convenience, we do this on some selected routes, and for the sake of conciseness, we demonstrate the results only for the smoothest detected route.
Let us start with the first step of the algorithm. The height of each pixel strongly depends on the heights of its neighbors, which induces a unit-root effect that causes non-stationarity. A very familiar procedure for testing stationarity against a unit root is the KPSS test 26. Using the R package tseries, the KPSS statistic for the increments of the smoothest route is 0.33641, which produces a p-value equal to 0.1, and thus the incremental stationarity assumption is verified. Testing the stationarity is not restricted to this method; we refer to ref. 25 for another method, which requires several sample paths, whereas we only have one realization, so it cannot be implemented here.
Note that there are some reasonable effects that can deflect the trajectory of a subdiffusive process from the Gaussian assumption 27. That is why we test the normality assumption in the second step. There are plenty of normality tests, including Kolmogorov-Smirnov, Anderson-Darling, Shapiro-Wilk, Cramér-von Mises, Shapiro-Francia and Jarque-Bera. There is no definite ordering of these tests in terms of power, but some of them require the strong assumption of independence of the sample path, which is violated here under the subdiffusion model. Fortunately, the Jarque-Bera method is not sensitive to the independence assumption, and we employ it to test the normality of the increments. Using the R package fBasics, we find that the χ²-statistic of the Jarque-Bera test is equal to 3.5049 with a corresponding p-value of 0.1733, which confirms the normality of the increments of the smoothest path.
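For readers without R at hand, the Jarque-Bera statistic is easy to compute directly (a numpy sketch of the textbook formula, not the fBasics implementation); under normality it is approximately χ²(2)-distributed:

```python
import numpy as np

def jarque_bera_stat(x):
    """Jarque-Bera statistic n/6 * (S^2 + (K - 3)^2 / 4), built from the
    sample skewness S and sample kurtosis K of x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = x - x.mean()
    s2 = np.mean(z**2)
    S = np.mean(z**3) / s2**1.5   # sample skewness
    K = np.mean(z**4) / s2**2     # sample kurtosis (normal: 3)
    return n / 6.0 * (S**2 + (K - 3.0)**2 / 4.0)

rng = np.random.default_rng(0)
jb_normal = jarque_bera_stat(rng.standard_normal(2000))   # small under normality
jb_uniform = jarque_bera_stat(rng.uniform(size=2000))     # large: kurtosis ~ 1.8
```

Rejection at the 5% level corresponds to exceeding the χ²(2) quantile 5.99; the reported value 3.5049 stays below it, consistent with the quoted p-value 0.1733.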
Ergodicity states that long-time averages of physical characteristics provide the same information as the corresponding ensemble averages. The ergodicity assumption can be violated for sub- 28 and super- 29,30 diffusive trajectories, even under the power law, and it is one of the most important parts of such studies, particularly those on single-particle tracking. Using the dynamical functional approach, Figure 5 shows the rate of convergence of the cumulative mean to zero. The range of oscillation around zero is very small for large enough values of n, which verifies the ergodicity of the smoothest path.
At this point, the smoothest route has passed three steps of the algorithm, and in the current step we need to attribute one of the fBm, CTRW or DFL models to our data. To this end, we use the p-variation method 33 to study the variational behavior of the process. Let $V^{(p)}(t_k)$ denote the (1/p)-th root of the incremental p-norm, computed with the estimated Hurst index $\tilde{H}$ produced by any proper method. Figure 6 exhibits the resulting values of $V^{(p)}_k$ for p = 2 and p = 1/0.42065. The results verify that the smoothest route does not follow the CTRW model, and thus we remove the CTRW from our list.
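A minimal version of the p-variation statistic reads as follows (our own sketch; the partition scheme of ref. 25 is more elaborate):

```python
import numpy as np

def p_variation(x, p, lag=1):
    """(1/p)-th root of the sum of |increments|^p at a given lag.
    For an fBm of index H, the sum at mesh 1/m scales like m^{1 - pH},
    so p = 1/H is the critical order at which it stabilizes."""
    x = np.asarray(x, dtype=float)
    inc = x[lag:] - x[:-lag]
    return float(np.sum(np.abs(inc)**p) ** (1.0 / p))

# On the straight line t -> t with mesh 0.01, the 2-variation root is
# (100 * 0.01^2)^{1/2} = 0.1.
v = p_variation(np.linspace(0.0, 1.0, 101), 2.0)
```

Plotting this statistic over refinements of the grid, for p = 2 and p = 1/H̃, is what distinguishes the fBm-like scaling from the CTRW behavior in Fig. 6.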
We have to make a decision between the fBm model and the DFL in the final step. There is a testing procedure based on the mean maximal excursion (MME) method 34, which needs repeated trajectories from a single process. In some practical problems, the mechanism of the data allows gathering independent trajectories of a subdiffusive model with a constant Hurst index 24. However, in our study, each route is considered as a realization of a subdiffusive process with its own Hurst index. Thus, restricting attention to the smoothest route, we only have one trajectory, which could be either an fBm or a DFL. Fortunately, the MME method has been modified to be implemented on a single trajectory to decide between the fBm and the DFL 35. Let S_t denote the total number of distinct sites visited by the process X(t) up to time t. The ratio $S_t / X(t)^2$ is known as the filling ratio, and we have to compute its slope on a log-log scale. The hypothesis of fBm is rejected in favor of the DFL if the slope is far from zero. (In Fig. 6, the smoothest route behaves like simulated fBm sample paths of the same index and size: panel (b) shows $V^{(2)}_k$, and the 2-variation is increasing in k, which is why the CTRW model was rejected in favor of the fBm.) Figure 7 shows the values of the filling ratio in a double-logarithmic scale, and the estimated slope is −4.79 × 10^{−4}, which implies that the logarithm of the filling ratio and the time index are uncorrelated. This means that the realization was constructed by an fBm model alone and cannot be decomposed into an fBm plus a fractal noise. So the DFL model is rejected, and the final step of the fBm trajectory diagnosis algorithm is passed.
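The filling-ratio slope can be estimated along the following lines (a sketch in which a hypothetical bin width stands in for 'sites' on the lattice):

```python
import numpy as np

def filling_ratio_slope(x, bin_width=1.0):
    """Fit the slope of log(S_t / X(t)^2) against log t, where S_t counts
    the distinct bins of width bin_width visited up to time t."""
    x = np.asarray(x, dtype=float)
    bins = np.floor(x / bin_width).astype(int)
    seen, s = set(), []
    for bval in bins:                 # running count of distinct visited bins
        seen.add(bval)
        s.append(len(seen))
    t = np.arange(1, len(x) + 1, dtype=float)
    ratio = np.asarray(s, dtype=float) / np.maximum(x**2, 1e-12)
    slope, _ = np.polyfit(np.log(t), np.log(ratio), 1)
    return float(slope)

# For x(t) = sqrt(t), S_t ~ sqrt(t) and X(t)^2 = t, so the slope is ~ -1/2;
# for an fBm-like path the fitted slope should sit near zero instead.
slope = filling_ratio_slope(np.sqrt(np.arange(1.0, 1001.0)))
```

A slope near zero, as in the reported −4.79 × 10⁻⁴, is what supports the fBm model over the DFL.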