Necessary and sufficient conditions for limit theorems for quadratic variations of Gaussian sequences

Abstract: The quadratic variation of Gaussian processes plays an important role both in stochastic analysis and in applications such as the estimation of model parameters, and for this reason the topic has been extensively studied in the literature. In this article we study the convergence of quadratic sums of general Gaussian sequences. We provide necessary and sufficient conditions for different types of convergence, including convergence in probability, almost sure convergence, L^p-convergence, as well as weak convergence. We use a practical and simple approach which simplifies the existing methodology considerably. As an application, we show how convergence of the quadratic variation of a given process can be obtained by an appropriate choice of the underlying sequence.


Introduction
The quadratic variation of a stochastic process X plays an important role in different applications. For example, the concept is important if one is interested in developing stochastic calculus with respect to the process X. Furthermore, quadratic variation can be used to construct estimators for different parameters describing the process. For example, quadratic variation can be used in the estimation of the self-similarity index or of a parameter describing long range dependence, both of which are concepts with important applications in fields of science such as hydrology, chemistry, physics, and finance, to name just a few. Both in stochastic analysis and in estimation one is interested in studying the convergence of the quadratic variation. Going beyond convergence, one might want to obtain a central limit theorem, which allows one to apply statistical tools developed for normal random variables.
For Gaussian processes the study of quadratic variation goes back to Lévy, who studied the one-dimensional standard Brownian motion (W_t)_{t∈[0,1]} on the interval [0, 1] and showed the almost sure convergence

∑_{k=1}^{2^n} (W_{k2^{−n}} − W_{(k−1)2^{−n}})² → 1

along dyadic partitions. Later this result was extended to cover more general Gaussian processes in Baxter [4] and in Gladyshev [18] for uniformly divided partitions. General subdivisions were studied in Dudley [17] and Klein and Giné [24], where the optimal condition o(1/log n) for the mesh of the partition was obtained for almost sure convergence. It is also known that for the standard Brownian motion the condition o(1/log n) is not only sufficient but also necessary. For details on this topic see De La Vega [16] for a construction and [27] for recent results. Functional central limit theorems for a general class of Gaussian processes were studied in Perrin [42]. More recently, Kubilius and Melichov [25] defined a modified Gladyshev estimator and also studied its rate of convergence. Norvaiša [30] extended Gladyshev's theorem to a more general class of Gaussian processes under the assumption of a uniform mesh. Finally, we mention a paper by Malukas [29], who extended the results of Norvaiša to irregular partitions and derived sufficient conditions on the mesh for almost sure convergence.
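This classical picture is easy to reproduce numerically. The following minimal sketch (the function name is ours, not from the text) simulates the increments of a standard Brownian motion over a uniform partition of [0, 1] and evaluates the first order quadratic sum, which concentrates around t = 1 with standard deviation √(2/n):

```python
import numpy as np

def quadratic_variation_bm(n, seed=0):
    """First order quadratic variation of standard Brownian motion over the
    uniform partition t_k = k/n of [0, 1], simulated from exact increments."""
    rng = np.random.default_rng(seed)
    increments = rng.normal(0.0, np.sqrt(1.0 / n), size=n)  # W_{t_k} - W_{t_{k-1}}
    return float(np.sum(increments**2))

# The sums concentrate around t = 1 with standard deviation sqrt(2/n).
for n in (10**2, 10**4, 10**6):
    print(n, quadratic_variation_bm(n))
```

Refining the partition shrinks the fluctuation around the limit, which is the quantitative content of the mesh conditions discussed above.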
The case of the fractional Brownian motion with Hurst index H ∈ (0, 1) was studied in detail by Guyon and León [19]. The authors showed that an appropriately scaled first order quadratic variation (that is, the one based on the differences X_{t_k} − X_{t_{k−1}}) converges to a Gaussian limit if H < 3/4. In order to overcome the restriction H < 3/4 one can use generalisations of the quadratic variation (including the one based on the second order differences X_{t_k} − 2X_{t_{k−1}} + X_{t_{k−2}}). In [9] these were applied to the identification problem. In [14] these were used to study singularity functions for fractional processes, including the fractional Brownian sheet. Similarly, in [22] generalised quadratic variation was used to estimate the local Hölder index of a Gaussian process, and in [23] generalised quadratic variation was used to estimate the Hurst parameter of the fractional Brownian motion from discrete observations.
The most commonly used generalisation is the second order quadratic variation based on the differences X_{t_{k+1}} − 2X_{t_k} + X_{t_{k−1}}, which was studied in detail in a series of papers by Begyn [5,7,6] with applications to the fractional Brownian sheet and time-space deformed fractional Brownian motion. In particular, in [5] a sufficient condition for almost sure convergence was studied for non-uniform partitions. The central limit theorem and its functional version were studied in [6] and [7] with respect to a standard uniformly divided partition.
The central limit theorem for a more general sequence f(X_t), where X_t is a stationary Gaussian process, was proved in [12]. Later this result was refined in [48] to obtain functional convergence to the fractional Brownian motion or to the Rosenblatt process. For another generalisation, localised quadratic variations were introduced in [8] in order to estimate the Hurst function of multifractional Brownian motion. These results were later generalised in [13,26] to the identification problem of multifractional Brownian motions and multifractional Lévy motions, respectively.
The fractional Brownian motion has received a lot of attention in modelling as a (relatively) simple generalisation of a standard Brownian motion. However, for some applications the assumption of stationary increments is an unwanted feature. For this reason there is a need for extensions of fractional Brownian motion. Recent such generalisations are sub-fractional Brownian motion depending on one parameter H ∈ (0, 1) introduced by Bojdecki et al. [10], and bifractional Brownian motion depending on two parameters H ∈ (0, 1) and K ∈ (0, 1] (the case K = 1 corresponding to the fractional Brownian motion) introduced by Houdré and Villa [21], and later studied in more detail by Russo and Tudor [45]. Furthermore, bifractional Brownian motion was extended for values H ∈ (0, 1), K ∈ (1, 2) satisfying HK ∈ (0, 1) in [1].
There has also been recent interest in general Hermite variations, partly motivated by the influential paper by Breuer and Major [12]. Weighted Hermite variations with respect to the fractional Brownian motion were studied in [31], where both central and non-central limit theorems were proved. The error bounds in the case of the non-central limit theorem were studied in detail in [35]. The critical case H = 1/4 for quadratic variation was studied in [11]. Finally, the integrals of fractional Brownian motion were studied in [15].
The fractional Brownian sheet, a two-dimensional extension of the fractional Brownian motion, has received attention at least in the papers [43,44,40], where weighted quadratic variations of the fractional Brownian sheet, Hermite variations of the fractional Brownian sheet, and functional limit theorems for Hermite variations were studied. Power variations of more general Gaussian processes with stationary increments were studied in [2]. Finally, we mention [39], where limit theorems for power variations of ambit fields driven by a white noise were derived.
This paper aims to take a practical, instructive, and general approach to quadratic variations. Firstly, our aim is to provide easy-to-verify conditions for practitioners which still cover many interesting cases. Secondly, with our simplified approach we are able to provide intuitively clear explanations, such as the discussion in Subsection 3.3, rather than get lost in technical details.
To obtain this generality we study sequences of general n-dimensional Gaussian vectors Y_n = (Y_1^{(n)}, …, Y_n^{(n)}), whose distributions may depend on n, and we study the asymptotic behaviour of the vector Y_n or of its quadratic variation defined as the limit of ∑_{k=1}^n (Y_k^{(n)})², provided it exists in some sense. As such, different cases such as first or second order quadratic variations can be obtained by choosing the vectors Y_n suitably, and this fact will be illustrated in the present paper.
We begin by providing necessary and sufficient conditions for the convergence in probability which, applied to some quadratic functional of a given process, can be used to construct consistent estimators in the spirit of, e.g., [13,22]. Furthermore, we show that in this case the convergence holds also in L^p for any p ≥ 1. We will also apply the well-known Gaussian concentration inequality for Hilbert-space valued Gaussian random variables, which provides a simple condition that guarantees the almost sure convergence. This condition is applied to quadratic variations of Gaussian processes with non-uniform partitions, for which we obtain sufficient conditions for the convergence. More importantly, this result is shown to hold in many cases of interest. In the particular case of standard Brownian motion, this condition corresponds to the known necessary and sufficient condition. Compared to the existing literature, in many of the mentioned studies the almost sure convergence is obtained by the use of the Hanson and Wright inequality [20] together with some technical computations. In this paper we show how these results follow easily from the Gaussian concentration phenomenon.
We will also study central limit theorems in our general setting. We begin by providing necessary and sufficient conditions under which the appropriately scaled quadratic variation converges to a Gaussian limit. To obtain this result we apply a powerful fourth moment theorem proved by Nualart and Peccati [38] which, thanks to the recent results by Sottinen and the current author [47], can essentially always be applied. We will also show how a version of Lindeberg's central limit theorem for this case follows easily. Finally, we will use some well-known matrix norm relations to obtain a surprisingly simple way of establishing convergence towards a normal random variable. More remarkably, it seems that this simple condition is essentially the one used in many studies. We will also provide a Berry-Esseen type bound that holds in our general setting and which, to the best of our knowledge, is not present in the literature except in some very special cases. Furthermore, our approach does not require knowledge of Malliavin calculus and should be accessible to anyone with some background in linear algebra and Gaussian vectors.
To summarise, in this paper we give necessary and sufficient conditions for the convergence of quadratic variations of general Gaussian vectors which can be used to reproduce and generalise existing results. Furthermore, we give easily verified sufficient conditions so that one can obtain the desired convergence results. As such, with our approach we are able to generalise the existing results as well as simplify the proofs considerably by relying on different techniques. The methods and results of this paper provide new tools to attack the problem under consideration while classically the problem is studied by relying on the Hanson and Wright inequality together with Lindeberg's central limit theorem.
The rest of the paper is organised as follows. In Section 2 we study general Gaussian vectors and provide our main results. In Section 3 we illustrate how our results can be used to study quadratic variations. We will consider non-uniform sequences and generalise some of the existing results. The main emphasis is on first order quadratic variations, which are more closely related to stochastic calculus, while we also illustrate how second order quadratic variations can be studied with our approach. We end Section 3 with a discussion on general quadratic variations. Finally, Section 4 is devoted to examples.

Notation and first results
Let Y_n = (Y_1^{(n)}, Y_2^{(n)}, …, Y_n^{(n)}) be a sequence of n-dimensional Gaussian vectors. We consider properties of the sequence Y_n as n tends to infinity. Throughout the paper we will also use Landau notation, i.e. for sequences a_n and b_n we write a_n = o(b_n) if a_n/b_n → 0 and a_n = O(b_n) if a_n/b_n remains bounded. We also denote a_n ∼ b_n as n → ∞ if lim_{n→∞} a_n/b_n = 1. We begin with the following definition, which is a discrete analogue of similar concepts introduced in [46].

Definition 2.1. 1. We say that Y has energy ε(Y) defined as the limit ε(Y) = lim_{n→∞} ∑_{k=1}^n E(Y_k^{(n)})², provided the limit exists. 2. We say that Y has quadratic variation defined as the limit of V_n = ∑_{k=1}^n (Y_k^{(n)})², provided the limit exists in some sense. 3. We say that Y has 2-planar variation Υ(Y) defined as the limit Υ(Y) = lim_{n,m→∞} ∑_{k=1}^n ∑_{j=1}^m [E(Y_k^{(n)} Y_j^{(m)})]², provided the limit exists.
We will also define Ṽ_n = V_n − E(V_n) to be the centred quadratic variation.
This corresponds to the first order quadratic variation with respect to the uniform partition. Similarly, by setting Δ_k X = X_{t_{k+1}^n} − 2X_{t_k^n} + X_{t_{k−1}^n} we obtain the second order quadratic variation with respect to the uniform partition.
Note that the norm ‖·‖₂ also depends on the dimension n, which we omit from the notation. We will denote by Γ^{(n)} the covariance matrix of the vector Y_n, i.e. the n × n matrix with entries Γ^{(n)}_{ij} = E(Y_i^{(n)} Y_j^{(n)}). Note that with this notation the energy of a process Y is simply the limit of the trace of the matrix Γ^{(n)}, i.e. ε(Y) = lim_{n→∞} trace(Γ^{(n)}). Similarly, Y_n has finite quadratic variation if ‖Y_n‖₂² converges as n tends to infinity. Recall also that the Frobenius norm of a matrix Γ^{(n)} = (Γ^{(n)}_{ij})_{i,j=1,…,n} is given by ‖Γ^{(n)}‖_F² = ∑_{i,j=1}^n (Γ^{(n)}_{ij})². Hence we have ‖Γ^{(n)}‖_F² = ∑_{k,j=1}^n [E(Y_k^{(n)} Y_j^{(n)})]². We will later show that in interesting cases we also have lim_{n,m→∞} ∑_{k=1}^n ∑_{j=1}^m [E(Y_k^{(n)} Y_j^{(m)})]² = lim_{n→∞} ‖Γ^{(n)}‖_F², which, in view of (3), shows that the 2-planar variation Υ(Y) is given by Υ(Y) = lim_{n→∞} ‖Γ^{(n)}‖_F². The following first result concerns convergence in probability. The proof follows essentially the arguments presented in [46] and is based on cumulant formulas for Gaussian random variables. The main difference is that we also prove the convergence in L^p for any p ≥ 1, while in [46] the authors considered only L²-convergence; we present the key points for the sake of completeness.
Theorem 2.1. Let (Y_n)_{n=1}^∞ be a sequence of Gaussian vectors with finite energy. Then the quadratic variation exists as a limit in probability if and only if the sequence (Y_n)_{n=1}^∞ has 2-planar variation. In this case, the convergence holds also in L^p for any p ≥ 1.

Proof. Let Z_n = ∑_{k=1}^n (Y_k^{(n)})². We start by showing that Z_n ∈ L^p for any p ≥ 1 and that the family {Z_n^p : n ≥ 1} is uniformly integrable. By Minkowski's inequality for measures together with the fact that the Y_k^{(n)} are Gaussian, we obtain ‖Z_n‖_{L^p} ≤ ∑_{k=1}^n ‖(Y_k^{(n)})²‖_{L^p} ≤ C_p ∑_{k=1}^n E(Y_k^{(n)})² for some constant C_p. As this upper bound converges to C_p ε(Y), the family {Z_n^p : n ≥ 1} is uniformly integrable for any p ≥ 1. Now the Gaussian cumulant formulas give

E(Z_n Z_m) = E(Z_n)E(Z_m) + 2 ∑_{k=1}^n ∑_{j=1}^m [E(Y_k^{(n)} Y_j^{(m)})]². (5)

Hence we observe that E(Z_n Z_m) converges as n, m → ∞ if and only if the double sum converges, since E(Z_n) → ε(Y) from the fact that Y has finite energy. Now relation (5) implies the result. Indeed, assuming that Z_n converges in probability, uniform integrability implies that the limit belongs to L^p and that the convergence holds also in L^p. Hence E(Z_n Z_m) converges, and (5) implies that the 2-planar variation exists. Conversely, if the 2-planar variation exists, then (5) implies that E(Z_n Z_m) converges to the same limit as E(Z_n²), which concludes the proof.

Remark 2.1.
It is straightforward to prove that the L^p-convergence also takes place in the continuous setting of Russo and Vallois [46].

Remark 2.2.
Note that in order to obtain convergence in L^p from the assumption that Z_n converges in probability, we used Gaussianity of the random variables Y_k^{(n)} only to obtain moment bounds for (Y_k^{(n)})², from which uniform integrability follows. Hence convergence in probability implies convergence in L^p for any distribution of Y_n such that the components Y_k^{(n)} are random variables living in some fixed Wiener chaos (see subsection 2.3 for the definition).
The following theorem gives conditions for when the quadratic variation is deterministic. It seems that this is indeed true in many cases of interest.
Let (Y_n)_{n=1}^∞ be a sequence of centred Gaussian vectors such that Y has finite energy. Then the quadratic variation exists as a limit in probability and is deterministic if and only if the 2-planar variation is zero. In this case the quadratic variation of Y is equal to the energy of the process, and the convergence holds also in L^p for any p ≥ 1.

Proof. The claim follows from (5) with m = n. Finally, the convergence in L^p follows directly from Theorem 2.1.

Remark 2.3. A generalisation of quadratic variation is the α-variation, defined as the limit of ∑_{k=1}^n |Y_k^{(n)}|^α. It is straightforward to show that if ∑_{k=1}^n |Y_k^{(n)}|^α converges in probability as n → ∞, then the convergence holds also in L^p for any p ≥ 1. In this case, however, the concept of 2-planar variation becomes much more complicated.
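To make the notions of energy and 2-planar variation concrete, the following sketch (the helper name and the normalisation are ours, not from the text) builds the covariance matrix Γ^(n) of scaled fractional Brownian motion increments; the scaling is chosen so that trace(Γ^(n)) = 1 for every n, while ‖Γ^(n)‖_F decreases in n for H = 0.6, consistent with a vanishing 2-planar variation:

```python
import numpy as np

def fbm_increment_cov(n, H):
    """Covariance matrix Gamma^(n) of the scaled fBm increments
    Y_k = n^(H - 1/2) * (B^H_{k/n} - B^H_{(k-1)/n}); the scaling makes
    trace(Gamma^(n)) = 1 for every n, so the energy is epsilon(Y) = 1."""
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    rho = 0.5 * ((d + 1)**(2*H) + np.abs(d - 1)**(2*H) - 2*d**(2*H))
    return rho / n

for n in (50, 200, 800):
    G = fbm_increment_cov(n, H=0.6)
    print(n, np.trace(G), np.linalg.norm(G, 'fro'))
```

Here the trace stays at the energy while the Frobenius norm shrinks, which is the diagonal form of a vanishing 2-planar variation discussed above.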

Almost sure convergence
In this subsection we address the question of when the convergence takes place almost surely. The key ingredient for our results is the concentration inequality for Gaussian measures. Using this inequality, we show that, rather surprisingly, the quadratic variation always converges to the energy of the process, whether or not the energy is finite, provided that the 2-planar variation vanishes.
Before stating our results we recall the following concentration inequality for Gaussian processes, taken from [3]. We present the result using our notation.

Lemma 2.1. Let Y_n be a centred Gaussian vector with covariance matrix Γ^{(n)}. Then for any t > 0 we have P(|‖Y_n‖₂ − E‖Y_n‖₂| ≥ t) ≤ 2 exp(−t²/(2‖Γ^{(n)}‖₂)).
Remark 2.4. In addition to finite-dimensional spaces, the result holds for any Hilbert-space valued Gaussian random variable. Furthermore, even for Banach-valued Gaussian random variables one can use the concentration inequality

P(|‖X‖_B − E‖X‖_B| ≥ t) ≤ 2 exp(−t²/(2σ²)), (7)

where (B, ‖·‖_B) is a Banach space, X is a Banach-valued Gaussian random variable, and σ² = sup_{L∈B*: ‖L‖≤1} E L(X)². Applying this to R^n equipped with the norm ‖·‖_α together with the Riesz representation theorem yields a corresponding bound, which can be used to obtain convergence of α-variations.
In order to obtain almost sure convergence of the quadratic variation, the idea is to find an upper bound on ‖Γ^{(n)}‖₂, say ‖Γ^{(n)}‖₂ ≤ φ(n) with φ(n) = o(1/log n); then the almost sure convergence follows immediately from the Borel-Cantelli lemma.
Theorem 2.2. Let (Y_n)_{n=1}^∞ be a sequence of Gaussian vectors satisfying the uniform integrability condition (9), and assume that ‖Γ^{(n)}‖₂ → 0. Then, as n → ∞, the convergence (10) holds in probability and in L^p for any p ≥ 1. Furthermore, the convergence holds almost surely for any sequence satisfying ‖Γ^{(n)}‖₂ = o(1/log n).

Proof. The convergence in probability follows immediately from Lemma 2.1, and the almost sure convergence follows by applying the Borel-Cantelli lemma. Now the convergence (10) follows from the decomposition together with the uniform integrability condition (9), which also implies the convergence in L^p.
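Using the same scaled fBm increments as a test case (the helper name is ours), one can check numerically that ‖Γ^(n)‖₂ log n decreases towards zero for H = 0.7, so the condition ‖Γ^(n)‖₂ = o(1/log n) needed for the Borel-Cantelli argument is plausible in this example:

```python
import numpy as np

def spectral_norm_fbm(n, H):
    """||Gamma^(n)||_2 (largest eigenvalue) for the covariance of the scaled
    fBm increments Y_k = n^(H - 1/2) * (B^H_{k/n} - B^H_{(k-1)/n})."""
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    G = 0.5 * ((d + 1)**(2*H) + np.abs(d - 1)**(2*H) - 2*d**(2*H)) / n
    return float(np.linalg.eigvalsh(G).max())

# ||Gamma^(n)||_2 * log n decreases towards zero, so the Borel-Cantelli
# argument suggests almost sure convergence of the quadratic variation.
for n in (100, 400, 1600):
    print(n, spectral_norm_fbm(n, H=0.7) * np.log(n))
```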
Remark 2.5. Note that it may happen that we have convergence in probability (or almost surely) while the energy is infinite. For example, it can be shown that this is the case for the fractional Brownian motion B^H with Hurst index H ∈ (0, 1/2) and its non-scaled quadratic variation, i.e. the one corresponding to the vector of increments of B^H. Note also that for finite energy processes this result shows that the quadratic variation converges to the energy, so that the limit is deterministic. Consequently, Theorem 2.1 implies that the 2-planar variation vanishes. This answers a question raised in [46, Remark 3.12] in our discrete setting. Similarly, one can use the general concentration inequality (7) to give an analogous answer in the continuous case.
Computation of the spectral norm ‖Γ^{(n)}‖₂, or equivalently of the largest eigenvalue, can be a challenging task. One way to overcome this challenge is to use the Frobenius norm ‖·‖_F, which provides an upper bound and is easier to analyse. Unfortunately, however, it provides quite rough estimates even in the simple case of standard Brownian motion, as will be shown in Subsection 4.1. A way to obtain general conditions is to use the matrix norm ‖·‖₁, which is also the main approach applied in the literature. This is the topic of the next theorem. The proof is based on some well-known relations for matrix norms.
Theorem 2.3. Let (Y_n)_{n=1}^∞ be a sequence of Gaussian vectors such that (9) holds. Furthermore, assume there exists a function φ(n) = o(1/log n) such that ‖Γ^{(n)}‖₁ ≤ φ(n). Then the convergence holds almost surely.
Proof. For an arbitrary matrix A = (A_{ij})_{i,j=1,…,n}, recall the well-known bound ‖A‖₂ ≤ √(‖A‖₁‖A‖_∞) for the matrix norm ‖A‖₂. Since Γ^{(n)} is symmetric, we have ‖Γ^{(n)}‖₁ = ‖Γ^{(n)}‖_∞ and hence ‖Γ^{(n)}‖₂ ≤ ‖Γ^{(n)}‖₁ ≤ φ(n). Together with Theorem 2.2, this concludes the proof.
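The two matrix norm facts used in the proof are easy to verify numerically on a symmetric test matrix (the example is ours):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((50, 50))
A = (A + A.T) / 2          # symmetric, like a covariance matrix

norm_2 = np.linalg.norm(A, 2)            # spectral norm (largest singular value)
norm_1 = np.abs(A).sum(axis=0).max()     # maximum absolute column sum
norm_inf = np.abs(A).sum(axis=1).max()   # maximum absolute row sum

# ||A||_2 <= sqrt(||A||_1 * ||A||_inf); for symmetric A the two norms on
# the right coincide, so ||A||_2 <= ||A||_1.
assert norm_2 <= np.sqrt(norm_1 * norm_inf)
print(norm_2, norm_1)
```

The advantage exploited in the theorem is that ‖·‖₁ only requires summing absolute covariances, with no eigenvalue computation.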
The following final result of this section is useful for stationary sequences. If φ(n) → 0 as n tends to infinity, then the convergence holds in probability and in L^p. Furthermore, the convergence holds almost surely provided that φ(n) = o(1/log n).

Proof. For any j ≥ 1 we have the required bound on the row sums of Γ^{(n)}, from which the result follows.

Central limit theorem
In this section we provide necessary and sufficient conditions for the central limit theorem (CLT) to hold. More precisely, we present two simple results, where the first gives a version of Lindeberg's CLT for quadratic variations, and the second (cf. Theorem 2.8) gives a simple condition which actually holds in most of the studies cited in the introduction. Our necessary and sufficient condition is based on the fourth moment theorem of Nualart and Peccati [38]. Hence we begin by recalling some basic facts on Wiener chaos. For more details we refer to the monographs [36,41,32].
Let X be a separable centred Gaussian process. We denote by H the Hilbert space spanned by the indicator functions 1_{(0,t]} and closed with respect to the inner product ⟨1_{(0,t]}, 1_{(0,s]}⟩_H = R_X(t, s), where R_X is the covariance of X. Then one can define the first chaos H_1 of X as the collection of Gaussian random variables X(h), h ∈ H. Equivalently, the elements of H_1 can be defined as the L²-closure of the linear span of the random variables X_t. Similarly, for fixed q ≥ 1, the qth Wiener chaos of X, denoted by H_q, is defined as the closed linear subspace of L²(Ω) generated by the family {H_q(X(h)) : h ∈ H, ‖h‖_H = 1}, where H_q is the qth Hermite polynomial. The mapping I^X_q(h^{⊗q}) = H_q(X(h)) can be extended to a linear isometry between the symmetric tensor product H^{⊙q} and the qth Wiener chaos H_q, and for any h ∈ H^{⊙q} the random variable I^X_q(h) is called a multiple Wiener integral of order q.

Remark 2.7. If X = W is a standard Brownian motion, then H is simply the space L²([0, T], dt). In this case the random variable I^X_q(h) coincides with the q-fold multiple Wiener-Itô integral of h (see [36]).

Remark 2.8. Let X be a separable centred Gaussian process on an interval [0, T]. It was proved in [47] that X admits a Fredholm integral representation X_t = ∫₀^T K(t, s) dW_s, where K ∈ L²([0, T]²) and W is a Brownian motion, if and only if ∫₀^T E X_u² du < ∞. Furthermore, it was shown that this representation can be extended to a transfer principle which can be used to develop stochastic calculus with respect to X. In particular, the transfer principle can be used to define multiple Wiener integrals with respect to X as multiple Wiener integrals with respect to a standard Brownian motion. This definition coincides with the one given via Hermite polynomials.
Finally, we are ready to recall the following characterisation of convergence towards a Gaussian limit.
Theorem 2.5 ([38]). Let {F_n}_{n≥1} be a sequence of random variables in the qth Wiener chaos, q ≥ 2, such that lim_{n→∞} E(F_n²) = σ². Then, as n → ∞, the following asymptotic statements are equivalent:

1. F_n converges in law to a centred Gaussian random variable with variance σ²;
2. E(F_n⁴) → 3σ⁴.

Remark 2.9. In this paper we are studying quadratic variations of Gaussian sequences. Hence, thanks to the Fredholm representation from [47], such objects can be viewed as sequences in the second chaos.
Remark 2.10. The case of the second chaos was studied in detail in [33,34] where the authors characterised all possible limiting laws. More precisely, in [33] it was proved that if a sequence in the second chaos converges in law to some random variable F , then F can be viewed as a sum of a normal random variable and an independent random variable belonging to the second chaos.
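As a sanity check of the fourth moment condition in the simplest second chaos example (the helper is ours), one can estimate E[F_n⁴] for F_n = (∑_k ξ_k² − n)/√(2n) with i.i.d. standard normals; since ξ_k² − 1 = H_2(ξ_k), this sequence lives in the second chaos and the exact fourth moment is 3 + 12/n:

```python
import numpy as np

def fourth_moment(n, n_samples=50_000, seed=1):
    """Monte Carlo estimate of E[F_n^4] for the normalised quadratic variation
    F_n = (sum_k xi_k^2 - n) / sqrt(2 n) of i.i.d. standard normals xi_k.
    The exact value is 3 + 12/n, so E[F_n^4] -> 3 as n grows, matching the
    fourth moment criterion for asymptotic normality."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((n_samples, n))
    F = (np.sum(xi**2, axis=1) - n) / np.sqrt(2 * n)
    return float(np.mean(F**4))

for n in (5, 50, 200):
    print(n, fourth_moment(n), 3 + 12 / n)
```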
We are now ready to prove our results. We begin with the following auxiliary technical lemma whose proof is postponed to the appendix.

Lemma 2.2. For the quadratic sum V_n = ∑_{k=1}^n (Y_k^{(n)})² we have the distributional identity V_n = ∑_{k=1}^n λ_k^n ξ_k² in law, where ξ_1, …, ξ_n are independent standard normal random variables. In particular, Var(V_n) = 2 ∑_{k=1}^n (λ_k^n)² and E[(V_n − E V_n)⁴] = 12 (∑_{k=1}^n (λ_k^n)²)² + 48 ∑_{k=1}^n (λ_k^n)⁴.
Theorem 2.7. Let (Y_n)_{n=1}^∞ be a sequence of Gaussian vectors such that for every n ≥ 1 the elements Y_k^{(n)}, k = 1, …, n, belong to the first Wiener chaos, and let Γ^{(n)} denote the covariance matrix of Y^{(n)} with eigenvalues λ_1^n, λ_2^n, …, λ_n^n. Set λ*(n) = max_{1≤k≤n} |λ_k^n|. Then, as n tends to infinity, the following are equivalent.

(i) Ṽ_n/√(Var(V_n)) converges in law to a standard normal random variable;
(ii) ∑_{k=1}^n (λ_k^n)⁴ / (∑_{k=1}^n (λ_k^n)²)² → 0;
(iii) λ*(n)/‖Γ^{(n)}‖_F → 0.
Proof. By assumption we are able to use the fourth moment Theorem 2.5, from which the equivalence of items (i) and (ii) follows with the help of Lemma 2.2. To obtain the equivalence of (i) and (iii), it suffices to use the well-known relations λ*(n)⁴ ≤ ∑_{k=1}^n (λ_k^n)⁴ ≤ λ*(n)² ∑_{k=1}^n (λ_k^n)², which concludes the proof.
As a simple corollary we obtain the following result, which corresponds to Lindeberg's CLT and is the one mainly used in the references given in the introduction.

Corollary 2.1. If λ*(n)/‖Γ^{(n)}‖_F → 0, then Ṽ_n/√(Var(V_n)) converges in law to a standard normal random variable.

Proof. By Lemma 2.2, V_n is in law a sum of independent random variables λ_k^n ξ_k², for which the condition λ*(n)/‖Γ^{(n)}‖_F → 0 implies Lindeberg's condition, and since Var(V_n) = 2 ∑_{k=1}^n (λ_k^n)², the result follows at once.

Remark 2.12. Note that since Lindeberg's CLT can be proved without the theory of Wiener chaos, the above result is valid for arbitrary sequences of Gaussian vectors Y_n.
Finally, the following theorem justifies that in many cases it is sufficient to find an upper bound for λ*(n), or even for max_{1≤j≤n} ∑_{k=1}^n |E(Y_j^{(n)} Y_k^{(n)})|. While the proof follows from simple relations for matrix norms, the result turns out to be very useful in many practical applications. In particular, the following result covers many of the cases studied in the literature. Furthermore, in this case it is easy to give a Berry-Esseen bound.

Theorem 2.8. Let the assumptions of Theorem 2.7 hold, and assume that Y_n is a Gaussian vector with finite non-zero energy. Then there exists a constant C > 0 such that

sup_x |P(Ṽ_n/√(Var(V_n)) ≤ x) − P(Z ≤ x)| ≤ C √n ‖Γ^{(n)}‖₂,

where Z is a standard normal random variable. Hence if √n ‖Γ^{(n)}‖₂ → 0, then Ṽ_n/√(Var(V_n)) converges in law to Z.

Proof. Recall that the trace norm is given by ‖Γ^{(n)}‖_* = ∑_{k=1}^n λ_k^n. By the Cauchy-Schwarz inequality we get the well-known matrix norm inequality ‖Γ^{(n)}‖_* ≤ √n ‖Γ^{(n)}‖_F. By assumption, the vector Y_n has finite non-zero energy. Hence, by observing that lim_{n→∞} ‖Γ^{(n)}‖_* = ε(Y) > 0, we obtain that for large enough n we have ‖Γ^{(n)}‖_F ≥ c n^{−1/2} for some constant c > 0. Next we observe that Var(V_n) = 2‖Γ^{(n)}‖_F². Hence, by Lemma 2.2, the fourth moment of Ṽ_n/√(Var(V_n)) is given by 3 + 12 ∑_{k=1}^n (λ_k^n)⁴/(∑_{k=1}^n (λ_k^n)²)², and since ∑_{k=1}^n (λ_k^n)⁴ ≤ ‖Γ^{(n)}‖₂² ‖Γ^{(n)}‖_F², the Berry-Esseen bound follows from Theorem 2.6.

Remark 2.13.
Note that the convergence towards a normal random variable follows also from Corollary 2.1, which does not rely on the theory of Wiener chaos. However, for a sequence living in the second chaos we also obtain a Berry-Esseen bound.
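The eigenvalue ratio λ*(n)/‖Γ^(n)‖_F appearing in Theorem 2.7 can be computed explicitly for the covariance of scaled fBm increments (the helper name is ours). Numerically the ratio tends to zero for small H but stays bounded away from zero for H close to 1, in line with the classical dichotomy at H = 3/4, where the limit becomes non-Gaussian:

```python
import numpy as np

def eig_ratio(n, H):
    """lambda*(n) / ||Gamma^(n)||_F for the covariance of scaled fBm increments."""
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    G = 0.5 * ((d + 1)**(2*H) + np.abs(d - 1)**(2*H) - 2*d**(2*H)) / n
    lam = np.abs(np.linalg.eigvalsh(G))
    return float(lam.max() / np.linalg.norm(G, 'fro'))

# The ratio shrinks with n for small H (Gaussian limit) but a single large
# eigenvalue keeps it of order one for H close to 1.
for H in (0.5, 0.7, 0.9):
    print(H, [round(eig_ratio(n, H), 4) for n in (100, 400)])
```

For H = 1/2 the matrix is (1/n)I, so the ratio is exactly 1/√n, which matches the Brownian case where the CLT always holds.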

Application to quadratic variations
In this section we apply the results to the quadratic variation of Gaussian processes. Throughout this section we consider arbitrary sequences of partitions π_n = {0 = t_0^n < t_1^n < … < t_{N(π_n)}^n = T}, where N(π_n) denotes the number of points in the partition. For notational simplicity, we drop the super-index n and simply use t_k instead of t_k^n. The mesh of the partition is denoted by |π_n| = max{t_k − t_{k−1} : t_k ∈ π_n \ {0}}. We also use m(π_n) = min{t_k − t_{k−1} : t_k ∈ π_n \ {0}}. Throughout this section we assume that |π_n| ≤ k(|π_n|) m(π_n) for some function k. Obviously, the partition is usually chosen such that k(|π_n|) ≤ C < ∞.

First order quadratic variations
In this subsection we study the first order quadratic variation of Gaussian processes, which is our main interest due to its connection to stochastic analysis. Throughout this subsection we also use the metric induced by the incremental variance of X, i.e. d_X(t, s) = E(X_t − X_s)².
Definition 3.1. Let X = (X_t)_{t∈[0,T]} be a centred Gaussian process. We say that X has first order φ-quadratic variation along π_n if V_1(π_n, φ) = ∑_{t_j ∈ π_n \ {0}} (X_{t_j} − X_{t_{j−1}})²/φ(t_j − t_{j−1}) converges in probability as |π_n| tends to zero.

Remark 3.1. A natural choice for the function φ is such that d_X(t, s) ∼ φ(|t − s|)|t − s|, so that each summand in V_1(π_n, φ) has mean approximately t_j − t_{j−1} and EV_1(π_n, φ) is close to T. In particular, in many interesting cases one has d_X(t, s) ∼ r(t − s) as |t − s| → 0 for some function r, meaning that one can choose φ(x) = r(x)/x. To simplify the notation we denote Ṽ_1(π_n, φ) = V_1(π_n, φ) − EV_1(π_n, φ). We also use Δt_j = t_j − t_{j−1}. We begin by giving the following general theorem, which generalises the main results of [29] by allowing us to drop some technical assumptions. The result follows directly by uniting and rewriting Theorem 2.2 and Theorem 2.8 for the sequence Y_j^{(n)} = (X_{t_j} − X_{t_{j−1}})/√(φ(Δt_j)).

Theorem 3.1. Let X be a Gaussian process, and assume that (14) holds for some function H(|π_n|).

If H(|π_n|) → 0 as |π_n| tends to zero, then the convergence (15) holds in probability. If H(|π_n|) = o(1/log n), then the convergence (15) holds almost surely. In these cases the convergence holds also in L^p for any p ≥ 1, provided that (13) holds.

Furthermore, assume that (16) holds. Then there exists a constant C > 0 such that the Berry-Esseen bound corresponding to Theorem 2.8 holds for Ṽ_1(π_n, φ).

Remark 3.2.
In [29] the author studied a particular class of Gaussian processes, while here we consider arbitrary Gaussian processes. Similarly, in [29] the main result was derived by using some technical computations under assumption (14) together with several additional technical assumptions. Here we have shown that (14) is the only assumption needed, which enlarges the class of processes considerably. Similarly, we have been able to simplify the proof, since we have shown that such results follow essentially from Gaussian concentration together with some matrix algebra.
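For fractional Brownian motion with d_X(t, s) = |t − s|^{2H}, the discussion above suggests the choice φ(x) = x^{2H−1}, in which case each summand of V_1(π_n, φ) has mean Δt_j and the limit should be T. The following sketch (the helper name and the choice of φ are our illustrative assumptions) checks this on a uniform partition:

```python
import numpy as np

def fgn(n, H, seed=3):
    """Exact fractional Gaussian noise: fBm increments over mesh 1/n (Cholesky)."""
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    cov = 0.5 * ((d + 1)**(2*H) + np.abs(d - 1)**(2*H) - 2*d**(2*H)) / n**(2*H)
    rng = np.random.default_rng(seed)
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

H, n = 0.6, 1000
dX = fgn(n, H)
dt = 1.0 / n
# phi(x) = x^(2H - 1): each summand (Delta X)^2 / phi(dt) has mean dt,
# so V_1(pi_n, phi) should be close to T = 1 for a fine partition.
V1 = np.sum(dX**2) / dt**(2*H - 1)
print(V1)
```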
Next we provide some sufficient conditions which are easy to verify. A particularly interesting case for us is Gaussian processes for which the function d(s, t) = E(X_t − X_s)² is C^{1,1} (that is, continuously differentiable with respect to both variables) off the diagonal. Note that a sufficient condition for this is that the variance EX_t² is C¹ and the covariance R of X is C^{1,1} off the diagonal.

Theorem 3.2. Let X be a continuous Gaussian process such that the function d(s, t) = E(X_t − X_s)² is C^{1,1} off the diagonal. Furthermore, assume that there exists a positive function f(s, t) dominating |∂_{st} d(s, t)| off the diagonal. If there exists a function H(|π_n|) such that the resulting bound (16) holds, then the result of Theorem 3.1 holds with the function H(|π_n|).

L. Viitasaari
Proof. For j ≠ k we have E[(X_{t_j} − X_{t_{j−1}})(X_{t_k} − X_{t_{k−1}})] = −(1/2) ∫_{t_{j−1}}^{t_j} ∫_{t_{k−1}}^{t_k} ∂_{st} d(s, t) ds dt, giving us |E[(X_{t_j} − X_{t_{j−1}})(X_{t_k} − X_{t_{k−1}})]| ≤ (1/2) ∫_{t_{j−1}}^{t_j} ∫_{t_{k−1}}^{t_k} |∂_{st} d(s, t)| ds dt ≤ (1/2) ∫_{t_{j−1}}^{t_j} ∫_{t_{k−1}}^{t_k} f(s, t) ds dt. Here the sums of the double integrals can be controlled by Tonelli's theorem and the assumptions. The claim follows at once.
However, in any naturally chosen sequence of partitions we have sup_{n≥1} k(π_n) < ∞, which guarantees sup_{1≤k,j≤N(π_n)−1} φ(Δt_k)/φ(Δt_j) < ∞ for many functions φ. For example, this is obviously true for power functions φ(x) = x^γ, which is a typical choice in many cases.
As an immediate corollary we obtain the following result, which again seems to generalise the existing results in the literature. Most importantly, the following result shows how the lower bound for the variance plays a fundamental role. Furthermore, a standard assumption in the literature is that d(t, s) ∼ |t − s|^γ for some number γ ∈ (0, 2), which in particular covers the case of fractional Brownian motion and related processes. In the following the structure of the variance can be more complicated. For simplicity we only present the result in the case of a bounded function k(π_n), while the general case follows similarly.

Corollary 3.1. Let the notation and assumptions of Theorem 3.2 hold. Furthermore, assume that there exists a function r such that d(t, s) ∼ r(t − s) as |t − s| → 0, and choose φ(x) = r(x)/x. 1. If |π_n|² = o(r(|π_n|)), then |Ṽ_1(π_n, φ)| → 0 in probability and in L^p for any p ≥ 1. The convergence holds almost surely for any sequence satisfying |π_n|²/r(|π_n|) = o(1/log n).
Proof. The claim follows immediately from Theorem 3.2 by noting that our choice of function φ guarantees condition (16).
We end this section with the following result that recovers the case of the fractional Brownian motion and related processes. Note again that our technical assumptions are rather minimal.

Theorem 3.3. Let X be a continuous Gaussian process such that the function d(s, t) = E(X_t − X_s)² is C^{1,1} off the diagonal and satisfies |∂_{st} d(s, t)| ≤ C|t − s|^{2η−2} for some η ∈ (0, 1), η ≠ 1/2, and assume that there exists a function H(|π_n|) such that (16) holds.

Then the result of Theorem 3.1 holds with the function H(|π_n|).
Proof. Note that the case η > 1/2 follows by direct integration of the bound |∂_{st} d(s, t)| ≤ C|t − s|^{2η−2}, for some unimportant constants C which vary from line to line. Here we have used the fact that for positive numbers a, b and γ ∈ (0, 1) we have |a^γ − b^γ| ≤ |a − b|^γ. Treating the integral ∫_0^{t_{j−1}} ∫_{t_{j−1}}^{t_j} |∂_{st} d(s, t)| ds dt similarly concludes the proof.

Remark 3.5.
It is straightforward to give a version of Corollary 3.1 in the case η = 1/2 as well.

Second order quadratic variations
In this subsection we briefly study second order quadratic variations. In particular, we reproduce and generalise the results presented in the papers [5] and [6]. We present our results in a slightly different form; a comparison is provided in Remark 3.6. Usually the second order quadratic variation on [0, 1] is defined as the limit of ∑_{k=1}^n (X_{(k+1)/n} − 2X_{k/n} + X_{(k−1)/n})². To generalise to irregular subdivisions, Begyn [5] introduced and motivated second order differences along a sequence π_n. As in [5], we study the second order quadratic variation defined as the limit of the corresponding quadratic sum, and as before we use the short notation Ṽ_2 for the centred version. We also assume that the derivative ∂⁴R(s, t)/∂s²∂t² of the covariance function R of X exists off the diagonal and satisfies (17) for some number γ ∈ (0, 2). Finally, we make the simplifying assumption sup_n k(π_n) < ∞ on the function k. Hence it is also natural to assume (18). In particular, the assumptions made in [5] implied the asymptotic relation (19) for the variance, in which case (18) is clearly satisfied.

Theorem 3.4. Assume that (17) and (18) hold, and that sup_n k(π_n) < ∞.

There is a constant C > 0 such that the Berry-Esseen bound corresponding to Theorem 2.8 holds for Ṽ_2, where Z is a standard normal random variable.

Proof. By (18) and the boundedness of k(π_n), we obtain a lower bound for the variance for some constant C. Furthermore, it was proved in [5] that the boundedness of k(π_n) together with (17) yields an upper bound for the covariances of the second order differences for a constant C. This gives a bound for the corresponding matrix norm, from which the result follows immediately by combining Theorems 2.2 and 2.8.

Remark 3.6.
To compare our result with the ones provided in the papers [5] and [6], first note that we were able to reproduce and generalise the main theorem of [5], although we gave our result in a slightly different form. Indeed, in [5] several additional technical conditions were assumed to ensure the asymptotic relation (19), while here we have worked with a general variance. This is helpful since the message of our result is that essentially one only has to study the asymptotic behaviour of the variance, while (17) guarantees the upper bound for the corresponding matrix norm. Furthermore, the central limit theorem in [6] was proved only for uniformly divided partitions and under more restrictive technical conditions, similar to those in [5], by finding a lower bound for the variance in order to apply the Lindeberg CLT.
Here we have proved that such a result holds also for non-uniform partitions, and the result follows easily from the computations presented in [5] together with Theorem 2.8. Finally, we also obtained a Berry–Esseen bound. In particular, under (19) we obtain a bound proportional to $|\pi_n|$.

Remarks on generalised quadratic variations
We end this section with some remarks on the generalised quadratic variations introduced by Istas and Lang [22]. Let $a = (a_0, a_1, \ldots, a_p)$ be a vector such that $\sum_{k=0}^{p} a_k = 0$, where p is a fixed integer. Let also δ be a fixed small number and consider the time points $t_k = k\delta$, $k = 1, \ldots, n$. The a-differences of X are given by In [22] the authors considered stationary or stationary-increment Gaussian processes and studied generalised a-variations defined as the limit of Since X is either stationary or has stationary increments, the function d(t, s) depends only on the difference $|t - s|$. The assumption in [22] was that the function $v(t) = d(0, t)$ is 2D times differentiable (D being the greatest such integer, possibly 0), and that for some number $\gamma \in (0, 2)$ and some constant $C > 0$ we have where the remainder r satisfies $r(t) = o(t^{\gamma})$. The main result in [22] was that, under certain assumptions and with a suitable choice of the vector a, one can obtain a Gaussian limit with some rate, although in some cases one has to let the observation window nδ increase to infinity. Obviously, by using the results of this paper we could reproduce and generalise these results, at least to cover more general variances as here for the first and second order quadratic variations, together with much simplified proofs. Instead of getting lost in technical details, we wish to give some remarks and explanations. In [22] the main message was, roughly speaking, that the larger the values of D and s, the larger one has to choose the value p, i.e. one has to take into account more refined discretisations.
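To fix notation, the a-differences and the generalised a-variation of [22] take the following form (the normalisation is suppressed here and is an assumption):

```latex
\Delta_a X(t_k) := \sum_{i=0}^{p} a_i\, X_{t_{k+i}},
\qquad t_k = k\delta,
% generalised a-variation (before normalisation):
V(a, n) := \sum_{k} \bigl(\Delta_a X(t_k)\bigr)^{2}.
```

For example, the choice $a = (1, -2, 1)$ recovers the second order differences of the previous subsection, and the condition $\sum_{k=0}^{p} a_k = 0$ ensures that constants are annihilated.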
The idea is to try to find a discretisation vector a so that from which we obtain almost sure convergence and a central limit theorem (with $\operatorname{Var}\bigl(V(a, n) - \mathbb{E}\,V(a, n)\bigr)$ as normalisation, so one is left to analyse the asymptotics of this variance). Hence it remains to study the choice of the vector a.
The idea for this becomes clearer from the first order variation and Corollary 3.1. Indeed, as also pointed out in [22], the number D is the order of differentiability of X in the $L^2$-sense. If $D \geq 1$, the variance must be at least of order $(\Delta t_j)^2$, so it is not possible to obtain even convergence in probability. The larger D is, the larger the value of p should also be. Similarly, γ becoming close to 2 roughly means that D becomes closer to 1, so once again one needs to refine the discretisation to obtain a Gaussian limit. More precisely, as γ comes closer to 2 we see immediately that the variance is no longer enough to compensate $|\pi_n|^{\frac{3}{2}}$ in order to obtain a central limit theorem. Hence, in this case, one should consider second order quadratic variations.

Examples
This section is devoted to examples. We focus on reproducing some interesting and already studied examples to illustrate the power of our method, rather than on finding complicated new examples. On the other hand, our method can easily be applied to more complicated examples, as we illustrate in Subsection 4.5.
As particular examples, we study Brownian motion, fractional Brownian motion and related processes, together with two extensions: the sub-fractional Brownian motion and the bifractional Brownian motion. Furthermore, we focus on the first order quadratic variation, for which we find sufficient conditions on the mesh to obtain almost sure convergence. In this context a particularly interesting case for us is the bifractional Brownian motion, for which we are able to improve the sufficient condition proved in [29]. Throughout the section, for the sake of simplicity, we assume that the function $k(\pi_n)$ is bounded.

Standard Brownian motion
Let X = W be a standard Brownian motion. Then it is known that the almost sure convergence holds provided that $|\pi_n| = o\bigl(\frac{1}{\log n}\bigr)$ (for recent results on the topic, see [27]). Furthermore, this is sharp in the sense that one can construct a sequence with $|\pi_n| = O\bigl(\frac{1}{\log n}\bigr)$ such that the almost sure convergence does not hold. Now the sufficiency of $|\pi_n| = o\bigl(\frac{1}{\log n}\bigr)$ follows easily from the concentration inequality (6) applied to the increments of the Brownian motion. Indeed, in the case of the standard Brownian motion the covariance matrix $\Gamma(n)$ of the increments is diagonal, and we have Note also that if one uses the Frobenius norm $\|\Gamma(n)\|_F$ to obtain the upper bound, we have $\|\Gamma(n)\|_F = |\pi_n|$ provided that $\frac{|\pi_n|}{m(\pi_n)} \leq C$. Hence we only obtain half of the best possible rate even in the case of the standard Brownian motion. Finally, it is straightforward to obtain a central limit theorem which, of course, is already well known.
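As a quick numerical illustration (an addition, not part of the argument above), one can check the convergence of the quadratic variation of a simulated Brownian path over a partition with small mesh; the sample size and seed below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform partition of [0, 1] with mesh 1/n; for n = 10**5 the mesh is
# well below 1/log(n), the regime where a.s. convergence holds.
n = 100_000
dt = 1.0 / n

# Brownian increments over the partition: independent N(0, dt) variables.
increments = rng.normal(0.0, np.sqrt(dt), size=n)

# Quadratic variation sum over the partition; close to T = 1 for large n,
# with fluctuations of order sqrt(2/n).
qv = np.sum(increments**2)
print(qv)
```

The fluctuation estimate follows since each squared increment has variance $2\,(\Delta t_j)^2$ and the summands are independent.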

Fractional Brownian motion
Recall that a fractional Brownian motion $B^H$ with Hurst index $H \in (0, 1)$ is a continuous centred Gaussian process with covariance function $R(s,t) = \frac{1}{2}\bigl(s^{2H} + t^{2H} - |t-s|^{2H}\bigr)$. The case $H = \frac{1}{2}$ reduces to a standard Brownian motion. We obtain $L^p$-convergence of general α-variations in a straightforward manner by using Remark 2.3. We now turn to the convergence of the quadratic variation, which is more interesting for us. Now it is natural to take $\varphi(x) = x^{2H-1}$, since for any partition of [0, T] we have The following result is a direct consequence of Theorem 3.3.
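As a numerical illustration (an addition, with arbitrary parameter choices), one can simulate fractional Brownian motion by a Cholesky factorisation of its covariance matrix and compare the quadratic variation sum with the deterministic sum $\sum_j (\Delta t_j)^{2H}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Parameters (arbitrary illustrative choices): Hurst index H < 3/4, so a
# central limit theorem holds, and a uniform partition of [0, 1].
H, n = 0.7, 1000
t = np.linspace(1.0 / n, 1.0, n)

# fBm covariance R(s, t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2.
s, u = np.meshgrid(t, t)
R = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))

# Sample a path via the Cholesky factor: X = L @ Z with Z standard normal.
L = np.linalg.cholesky(R)
X = L @ rng.standard_normal(n)

# Quadratic variation sum versus sum_j (dt_j)^{2H}; since each increment
# has variance (dt)^{2H}, the ratio concentrates around 1.
qv = np.sum(np.diff(X, prepend=0.0)**2)
expected = n * (1.0 / n)**(2 * H)
print(qv / expected)
```

Cholesky sampling is exact but costs $O(n^3)$; it is adequate here, while for long paths one would use, e.g., circulant embedding instead.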

There exists a constant C > 0 such that
where Z is a standard normal random variable. In particular, a central limit theorem holds for all values $H < \frac{3}{4}$.

Remark 4.2. Note that, in the case $H < \frac{1}{2}$, we obtain similar sufficient conditions as in the case of a standard Brownian motion. Indeed, the only difference is that, since here the increments are not independent, we have to pose the additional assumption $\sup_{n \geq 1} k(\pi_n) < \infty$ in order to obtain the optimal condition $o\bigl(\frac{1}{\log n}\bigr)$.

Remark 4.3. By considering uniform partitions it can be shown that one cannot obtain any better result by concentration inequalities. It would be interesting to know whether the given conditions are optimal, similarly as in the case of a standard Brownian motion. However, for Brownian motion the counterexamples are constructed by relying on the independence of the increments and, to the best of our knowledge, there exists no method to attack the problem for a general Gaussian process.
Remark 4.4. The limit theorems for quadratic variations of fractional Brownian motion are extensively studied in the literature. However, most of the related studies rely on uniform partitions and focus on generalisations, e.g. on Hermite variations or weighted variations, rather than on generalising the sequence of partitions. Furthermore, to recover the central limit theorem in the case $H < \frac{3}{4}$ our approach is based only on simple linear algebra. For this reason our approach may be more applicable when generalising the results to arbitrary Gaussian processes, while the obvious drawback is that it cannot provide a full

Remark 4.6. We remark that the above result was already given in [29] with the same rates, although there the condition for the case $H = \frac{1}{2}$ was $|\pi_n|\,\bigl|\log |\pi_n|\bigr| = o\bigl(\frac{1}{\log n}\bigr)$, which would follow from Remark 3.4. Obviously, however, in this case we have a standard Brownian motion, so that $|\pi_n| = o\bigl(\frac{1}{\log n}\bigr)$ is sufficient. Note also that in this case one cannot do better by using concentration inequalities. Indeed, this comes from the "fractional Brownian part" $|t - s|^{2H}$.

Bifractional Brownian motion
A particularly interesting case for us is the bifractional Brownian motion which was also studied in [29]. However, with our method we are able to improve the results of [29].
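For the reader's convenience we recall the standard definition: the bifractional Brownian motion $B^{H,K}$, with $H \in (0,1)$ and $K \in (0,1]$, is the centred Gaussian process with covariance

```latex
R(s, t) \;=\; \frac{1}{2^{K}}\Bigl( \bigl(s^{2H} + t^{2H}\bigr)^{K} - |t - s|^{2HK} \Bigr).
```

Setting K = 1 recovers the covariance of the fractional Brownian motion $B^H$, consistently with Remark 4.7 below.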

Remark 4.7.
Note that K = 1 corresponds to the ordinary fractional Brownian motion. It is straightforward to see that B H,K is HK-self-similar and Hölder continuous of any order γ < HK. For more details on the properties of bifractional Brownian motion, we refer to [45] and references therein.
While the main emphasis in [45] was integration via regularisation, it was pointed out there that one can prove the existence of the α-variation in $L^1$. Consequently, the following result is obvious from Remark 2.3.