Model verification for Lévy-driven Ornstein-Uhlenbeck processes

Abstract: Lévy-driven Ornstein-Uhlenbeck (or CAR(1)) processes were introduced by Barndorff-Nielsen and Shephard [1] as a model for stochastic volatility. Pham [17] developed a general formula to recover the unobserved driving process from the continuously observed CAR(1) process. When the CAR(1) process is observed at discrete times 0, h, 2h, . . . , [T/h]h, the driving process must be approximated. Approximated increments of the driving process are used to test the assumption that the CAR(1) process is Lévy-driven. The asymptotic behavior of the test statistic is investigated, and the performance of the test is illustrated through simulation.


Introduction
Univariate continuous-time autoregressive moving average (CARMA) processes are the continuous-time analogue of the widely employed discrete-time ARMA processes. CARMA(p, q) processes are the solutions of p-th order linear stochastic differential equations driven by a noise process. They were introduced in [11] in a Gaussian setting and generalized in [3] to include Lévy driving processes. Further extensions include multivariate CARMA and fractionally integrated CARMA models (see e.g. [15] and [7]).
As evidenced above, the probabilistic properties of CARMA processes have received considerable attention. However, there has been little development in statistical inference for such models. This paper takes one of the first steps in this direction, and complements recent work by Brockwell and Schlemm [8]. As mentioned in [8], if one decides to model a continuous time process using the CARMA framework, three main problems arise: a) the choice of the orders p and q; b) estimation of the model coefficients a_i and b_j; c) choosing an appropriate model for the Lévy driving process.
Most papers in the extant literature that deal with CARMA models assume that their order is known, and indeed in this paper we will focus on the CAR(1) model driven by a second order Lévy process. In particular, there are several papers that discuss estimation of the coefficients, especially for CAR(1) models; see e.g. [18] and the references therein. In [10], it is the asymptotic behaviour of the sample mean and covariances of the process Y that is of interest. Assuming that the driving process is fractional Lévy, these results can be applied to estimating the Hurst parameter of L. Jongbloed et al. [14] consider the Lévy-driven CAR(1) model in a semi-parametric framework. In [14], the goal is estimation of the marginal distribution function of Y (0) and the density of the Lévy measure; the method utilizes the Markovian structure of the CAR(1) process and its mixing properties. In [8] the authors address the third issue, namely estimation of the parameters of a specified family of Lévy processes, assuming that the order and coefficients of the model are known. In each of the preceding references, it is assumed that the continuous time process Y is observed only at discrete times, which is the usual situation in applications.
However, before one selects a parametric family of Lévy processes (i.e., (c)) and/or estimates the model coefficients (i.e., (b)), one should verify whether it is reasonable to assume that the driving process is Lévy. From the point of view of exploratory data analysis, the first step would be to plot the sample covariances of the driving process L at various lags. This procedure assumes a priori that the underlying driving process has a finite second moment. However, the driving process is unobservable and cannot be directly recovered if the observed CARMA process Y is sampled at discrete times. Thus, as in [8], the driving process can only be estimated and inference must be performed with noisy data. Hence, in this paper our main goal is to perform statistical inference on the sample covariances of the (approximately) recovered driving process, assuming that the second moment is finite.
To be precise, in this paper we focus on the second-order CAR(1) model only; that is, we study the unique solution Y of the stochastic differential equation dY(t) = -aY(t) dt + σ dL(t). We go beyond exploratory data analysis by providing a formal test of the hypothesis that the driving process L has uncorrelated increments, based on a discrete sample from the process Y. En route, we prove several results of independent interest, including a precise bound on the approximation error of the unit increments of the recovered process, as well as an elementary proof of a central limit theorem for the integrated CAR(1) process.
We proceed as follows. To perform statistical analysis on the unobserved driving process, in Section 2 we first use the inversion formula of Pham [17], that represents the unobserved driving process in terms of the continuously observed CAR(1) process. The same strategy was employed in [6] and in a multivariate setting in [8], Theorem 4.3. Since the CAR(1) process is observed at discrete times, as noted above the driving process cannot be recovered exactly and a trapezoidal approximation is used to replace an unobservable integral.
In Section 3, we are able to provide a uniform bound on the approximation error in the unit increments of the recovered driving process (see Lemmas 3.8 and 3.10 as well as Theorem 3.4). Our Lemma 3.8 can be compared to Theorem 5.7 in [8]. Although the result in the latter paper holds for more general multivariate CARMA models, our bound is more precise and is uniform with respect to N , the length of time that the process is observed. As a consequence, we can derive central limit theorems for partial sums and sample covariances (see Theorem 3.4 together with Corollary 3.6, and Theorem 3.14).
We note in passing that a different strategy to recover the unobserved driving process was employed in [12]. Without assuming any particular values of p and q for the general CARMA model, increments of the Lévy driving process are estimated via renormalized recovered noise from the Wold representation of the sampled CARMA sequence. The recovered process is shown to be L²-consistent under the assumption that the CARMA process is invertible.
In Section 4, as an important by-product, we prove a central limit theorem for the integrated CAR(1) process. The significance of this result is that the proof is quite elementary and does not require any mixing arguments.
We return to the problem of verifying the assumption that the driving process is Lévy in Section 5. We propose an appropriate test statistic to test the hypothesis that the driving process has uncorrelated increments and prove that it is asymptotically N (0, 1) under the null hypothesis. Several simulation studies illustrate the behaviour of the statistic under both the hypothesis and alternative.

Future work and open problems
There are a number of issues that go beyond the scope of this paper. First, in future work, we will extend our results to functionals acting on the recovered increments. Such results are needed to establish limit theorems for empirical processes or nonparametric density estimators. These in turn are tools to carry out more precise tests of goodness-of-fit once it has been concluded that the driving process is Lévy (for example, to test whether the driving process can be modelled as a gamma process). This approach should be contrasted with that taken by Jacod and Protter in [13], where the functional acts on increments of the observed process Y.
Next, our results rely on the assumption that the parameter a is known. (Without loss of generality, the parameter σ can be assumed to be one since it can be incorporated into a reparametrization of the Lévy noise.) It is natural to ask if our results are valid if we replace a with an estimator. This is currently under investigation.
Another question that arises is whether our results can be extended to general CARMA(p, q) or at least CARMA(p, 1) models. A close inspection of our proofs shows that they rely on the inversion formula and the second-order properties of Y. Hence, our results should be extendable, but this will require a detailed analysis.
Last, but not least, it would be desirable to extend some of our results to CAR(1) models driven by a stable Lévy process. Clearly, the techniques developed here are not appropriate since they rely on an L² approximation of the noise L using the observed process Y.
Examples of second-order Lévy processes include Brownian motion with drift, the Poisson process, and the gamma process.

Lévy-driven CAR(1) models
In what follows, we assume that the process L is càdlàg with stationary increments.
Definition 2.3 (CAR(1) process). A CAR(1) process Y = {Y(t), t ≥ 0} driven by the process L = {L(t), t ≥ 0} is defined to be the solution of the stochastic differential equation

dY(t) = -aY(t) dt + σ dL(t), (2.2)

where a, σ ∈ R+ and Y(0) is independent of {L(t), t ≥ 0}. We call the process L the driving process, and if L is a Lévy process then Y is called a Lévy-driven CAR(1) process.
The unique solution (cf. [16], Section 17) of equation (2.2) can be written as

Y(t) = e^{-at} Y(0) + σ ∫_0^t e^{-a(t-u)} dL(u).

The function f(u) = e^{-a(t-u)} is deterministic and continuously differentiable. Using an integration by parts formula we can define the CAR(1) process pathwise. The process Y is strictly stationary if L is a second-order Lévy process, a > 0, and Y(0) is independent of L and equal in distribution to σ ∫_0^∞ e^{-au} dL(u).

Lemma 2.4 ([1]). Let Y be a strictly stationary CAR(1) process driven by a second-order Lévy process L such that (2.1) holds. Then, for s ≥ 0, E[Y(0)] = σμ/a and Cov(Y(0), Y(s)) = (σ²η²/(2a)) e^{-as}.
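The stationary moments in Lemma 2.4 can be checked numerically. The sketch below is our own, not from the paper, and uses an Euler scheme with illustrative parameter values; for a Brownian driver with drift μ (so η = 1), the lemma predicts E[Y] = σμ/a and Var(Y) = σ²/(2a).

```python
import numpy as np

# Hedged sketch: Euler approximation of the CAR(1) SDE
#   dY(t) = -a Y(t) dt + sigma dL(t),
# with L a Brownian motion with drift mu (so eta = 1).
# All parameter values below are illustrative choices, not from the paper.
rng = np.random.default_rng(0)
a, sigma, mu = 1.0, 1.0, 0.5
dt, T = 1e-2, 2000.0
n = int(T / dt)

# Increments of the driving Levy process: dL ~ N(mu*dt, dt)
dL = mu * dt + np.sqrt(dt) * rng.standard_normal(n)

# Start Y at its stationary mean sigma*mu/a and iterate the Euler scheme
y = np.empty(n + 1)
y[0] = sigma * mu / a
for i in range(n):
    y[i + 1] = y[i] - a * y[i] * dt + sigma * dL[i]

mean_Y, var_Y = y.mean(), y.var()
# Lemma 2.4 predicts E[Y] = sigma*mu/a = 0.5 and Var(Y) = sigma^2/(2a) = 0.5
```

With a long horizon T the sample mean and variance settle near the stationary values, up to an O(a·dt) Euler bias.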

The sampled process
In practice, continuous time processes are usually sampled at discrete times.
In what follows, we will use the notation Y (t) when the time parameter t is continuous, and Y t when the time parameter t is discrete.
Here we assume that the CAR(1) process is observed at equally spaced intervals of length h. To be precise, let Y be a strictly stationary CAR(1) process. For 0 ≤ s < t we have

Y(t) = e^{-a(t-s)} Y(s) + σ ∫_s^t e^{-a(t-u)} dL(u).

For h > 0 and n ∈ Z+ choose t = nh and s = (n-1)h. The sampled process {Y^{(h)}_n, n = 0, 1, 2, . . .} can then be written as

Y^{(h)}_n = e^{-ah} Y^{(h)}_{n-1} + Z^{(h)}_n, n = 1, 2, . . . , (2.5)

where

Z^{(h)}_n = σ ∫_{(n-1)h}^{nh} e^{-a(nh-u)} dL(u). (2.6)

Now assume that L is a Lévy process. Since L has stationary and independent increments, {Z^{(h)}_n, n ≥ 1} is an iid sequence. Hence, the sampled process {Y^{(h)}_n, n = 0, 1, 2, . . .} is a discrete-time AR(1) process. If L is a second-order Lévy process, then the mean and variance of the noise Z^{(h)}_n follow from Lemma 2.4 and the stationarity of the process Y; see (2.7) and (2.8).
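The AR(1) structure of the sampled process can be seen directly in simulation. The sketch below is ours, with illustrative parameter values: for a Brownian-driven CAR(1) process (μ = 0, η = 1), Z^{(h)}_n is Gaussian with variance σ²(1 − e^{-2ah})/(2a), and the fitted lag-one regression coefficient should be close to e^{-ah}.

```python
import numpy as np

# Hedged sketch: for a Brownian-driven CAR(1) process (mu = 0, eta = 1),
# the sampled process Y_n^{(h)} is an AR(1) with coefficient exp(-a*h) and
# iid Gaussian noise of variance sigma^2 (1 - exp(-2*a*h)) / (2*a).
rng = np.random.default_rng(1)
a, sigma, h, N = 2.0, 1.0, 0.5, 100_000

phi = np.exp(-a * h)
noise_sd = sigma * np.sqrt((1.0 - np.exp(-2.0 * a * h)) / (2.0 * a))

# Simulate the exact sampled chain, started in stationarity
y = np.empty(N)
y[0] = rng.normal(0.0, sigma / np.sqrt(2.0 * a))
z = rng.normal(0.0, noise_sd, N)
for n in range(1, N):
    y[n] = phi * y[n - 1] + z[n]

# Least-squares estimate of the AR(1) coefficient
phi_hat = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])
```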

Recovering the driving process
If the CAR(1) process Y is continuously observed on [0, T ], then the following result from [17] (see also [5]) provides an inversion formula that represents L in terms of Y .

Theorem 2.5 (Inversion Formula). Let Y be a CAR(1) process. Then, for t ≥ 0,

L(t) = σ^{-1} ( Y(t) - Y(0) + a ∫_0^t Y(u) du ). (2.9)
Note that Theorem 2.5 does not require the assumption that L is Lévy.
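On a fine simulation grid the inversion formula can be applied almost directly. The sketch below is ours (parameters and grid sizes are illustrative): it simulates a Brownian-driven CAR(1) path, recovers L via a cumulative trapezoidal rule for the integral in (2.9), and compares the recovery with the simulated driving path.

```python
import numpy as np

# Hedged sketch: recover the driving process from a finely observed CAR(1)
# path via the inversion formula of Theorem 2.5,
#   L(t) = (Y(t) - Y(0) + a * int_0^t Y(u) du) / sigma.
# Parameters and grid sizes are illustrative choices.
rng = np.random.default_rng(2)
a, sigma = 1.0, 1.0
dt, T = 1e-3, 50.0
n = int(T / dt)

dL = np.sqrt(dt) * rng.standard_normal(n)   # standard BM increments
L = np.concatenate(([0.0], np.cumsum(dL)))  # true driving path on the grid

# Euler scheme for dY = -a Y dt + sigma dL
y = np.empty(n + 1)
y[0] = 0.0
for i in range(n):
    y[i + 1] = y[i] - a * y[i] * dt + sigma * dL[i]

# Cumulative trapezoidal approximation of int_0^t Y(u) du
int_y = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)))
L_hat = (y - y[0] + a * int_y) / sigma

max_err = np.max(np.abs(L_hat - L))  # small on a fine grid
```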

Approximation of increments using the inversion formula
When the CAR(1) process Y is sampled discretely, the driving process L cannot be recovered exactly via the inversion formula (2.9). Instead, it is necessary to approximate the increments of L over the sampling intervals. Using the inversion formula (2.9), the increment of L over the interval ((n − 1)h, nh], h > 0, n = 1, . . . , N, is

L(nh) − L((n−1)h) = σ^{-1} ( Y(nh) − Y((n−1)h) + a ∫_{(n−1)h}^{nh} Y(u) du ). (2.10)

The above increment requires the continuously observed process Y. If the process is observed only at the discrete times nh, then, as in [5], the integral is replaced by a trapezoidal approximation:

ΔL^{(h)}_n = σ^{-1} ( Y(nh) − Y((n−1)h) + (ah/2)(Y((n−1)h) + Y(nh)) ), n = 1, 2, . . . , N. (2.11)

We will refer to the above quantities as the estimated increments. They have the following properties.
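A minimal implementation of the estimated increments, assuming the trapezoidal form of (2.11); the helper name is ours, and all parameter values are illustrative.

```python
import numpy as np

# Hedged sketch of the estimated increments (2.11): with only the values
# Y(0), Y(h), Y(2h), ... available, the integral in the inversion formula is
# replaced by the trapezoidal rule over each sampling interval.
def estimated_increments(y_h, a, sigma, h):
    """y_h: array of Y sampled at 0, h, 2h, ...; returns the N estimated
    increments of L over ((n-1)h, nh]."""
    return (y_h[1:] - y_h[:-1] + 0.5 * a * h * (y_h[:-1] + y_h[1:])) / sigma

rng = np.random.default_rng(3)
a, sigma, h, sub = 1.0, 1.0, 0.01, 50     # fine grid dt = h / sub
dt = h / sub
n_units, n = 1000, 1000 * sub             # observe on [0, n_units * h]

# Fine-grid Euler path of a Brownian-driven CAR(1) process
dL = np.sqrt(dt) * rng.standard_normal(n)
y = np.empty(n + 1)
y[0] = 0.0
for i in range(n):
    y[i + 1] = y[i] - a * y[i] * dt + sigma * dL[i]

inc_hat = estimated_increments(y[::sub], a, sigma, h)
inc_true = np.add.reduceat(dL, np.arange(0, n, sub))  # true L-increments
corr = np.corrcoef(inc_hat, inc_true)[0, 1]           # close to 1 for small h
```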
Proposition 2.6. Let Y be a strictly stationary CAR(1) process driven by a second-order Lévy process L, with E[L(t)] = μt and Var(L(t)) = η²t. Then the estimated increments (2.11) have mean E[ΔL^{(h)}_n] = μh, and their covariances at positive lags are non-zero. The proof is given in the Appendix, Section A.1. We make several comments:
• We note that although L has independent increments, the non-zero covariance in (ii) appears due to the discretization error introduced by the trapezoidal approximation.
Hence, as expected, the discretization error disappears as h → 0. However, the rate of convergence to zero is the same for the covariances (s > 0) as for the variance (s = 0). For this reason, it does not seem possible to test for zero correlation of the increments of L using estimates of these covariances. An alternative approach is discussed in Section 3.2.2.

Asymptotics for the sampled process
In this section, we consider the asymptotic properties of the sample characteristics of the recovered driving process. We can write the estimated increments (cf. (2.11)) as:

Discrete approximation of the driving process
The estimated process L^{(h)}(Nh) can be written as a partial sum of the estimated increments: L^{(h)}(Nh) = Σ_{n=1}^{N} ΔL^{(h)}_n.

Sample mean and covariances
We define the sample mean of the estimated increments,

ΔL̄^{(h)} = (1/N) Σ_{n=1}^{N} ΔL^{(h)}_n,

and let

ρ̂^{(h)}(k) = γ̂^{(h)}(k) / γ̂^{(h)}(0), k ≥ 0,

be the sample correlations of the estimated increments, where γ̂^{(h)}(k) denotes the lag-k sample covariance.
Three discretization scenarios

We will consider three discrete sampling scenarios when investigating the asymptotic behaviour of the sample mean ΔL̄^{(h)} and the sample covariances.

Case (I): h is fixed, T = Nh, and N → ∞

As we noted in Proposition 2.6, discretization with a fixed frequency h introduces an error, leading to estimated increments with non-zero covariance. Nevertheless, one can still obtain some relevant limiting results that can be used for estimation of the model parameters.
We recall from equation (2.5) that the sampled process Y(nh) can be viewed as a discrete-time AR(1) process. This fact will be exploited to provide simple, direct proofs of Propositions 3.1 and 3.3; the proofs may be found in Appendices A.2.1 and A.2.2, respectively. Alternatively, observing that our model is a special case of that of Cohen and Lindner [10], their Theorems 2.1 and 3.5 for the sample mean and covariances of the discretely sampled process Y could also be applied here.

Proposition 3.1. Consider a strictly stationary Lévy-driven CAR(1) process Y . Then
Remark 3.2. We indicate how this result can be used in statistical inference. When h > 0 is known, the central limit theorem for the sample mean allows us to construct confidence intervals for µ based on the recovered Lévy process. Nuisance parameters η, a appear only in the variance of the limiting normal distribution and in principle bootstrapping avoids the need to estimate these parameters. On the other hand, if one attempts to construct a confidence interval for µ based on the process Y , then Lemma 2.4 indicates a non-identifiability issue.
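The bootstrap route mentioned in Remark 3.2 can be sketched as follows. This is purely illustrative: the data are stand-in iid increments, and the moving-block bootstrap with its tuning choices (block length, number of resamples) is our own assumption, chosen because the estimated increments are weakly dependent; it is not a prescription from the paper.

```python
import numpy as np

# Hedged sketch: percentile bootstrap confidence interval for mu based on
# estimated increments of the driving process over intervals of length h.
# A moving-block bootstrap is used since the estimated increments are weakly
# dependent (Proposition 2.6); all tuning choices are illustrative.
rng = np.random.default_rng(4)
h = 0.1
# Stand-in data for the sketch: iid N(mu*h, h) increments with mu = 1
delta_L = 1.0 * h + np.sqrt(h) * rng.standard_normal(2000)

def block_bootstrap_ci(x, h, block=20, n_boot=999, level=0.95, rng=rng):
    n = len(x)
    starts = rng.integers(0, n - block + 1, size=(n_boot, n // block))
    idx = starts[:, :, None] + np.arange(block)[None, None, :]
    boot_means = x[idx.reshape(n_boot, -1)].mean(axis=1) / h  # estimates of mu
    lo, hi = np.quantile(boot_means, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

lo, hi = block_bootstrap_ci(delta_L, h)
mu_hat = delta_L.mean() / h
```

The nuisance parameters η and a never need to be estimated: the interval comes directly from the resampled means.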
We state a result for the sample covariances as well. However, from an applied point of view this result is not particularly useful, since the theoretical covariances of the estimated increments do not vanish due to the discretization error.

Proposition 3.3. Consider a strictly stationary Lévy-driven CAR(1) process Y . Then
Case (II) and Case (III): h = 1/M and M → ∞

We start with some notation that will be used in Cases (II) and (III). Here, we assume that h = 1/M .

Convergence of partial sums

In Case (I) we analyzed the behaviour of the partial sum L^{(h)}(Nh) by representing it in terms of an AR(1) model (see Appendix A.2.1, proof of Proposition 3.1). Here, we take a different route: we bound the difference between the estimated Lévy process and the true process L using the bound given in Theorem 3.4 below. This approach is applicable in Cases (II) and (III), since the sampling error converges to zero; it could not be used in the previous case, due to the sampling error coming from a fixed h.
A result similar to Theorem 3.4 can be found in [12] (Theorem 3.2) for a zero-mean Lévy driving process L in a general CARMA framework. However, their recovery strategy is different: their analogue of L is based on the noise defined in equation (2.6), and, written in our notation, their recovered process differs from (3.5). Theorem 3.2 of [12] yields L² convergence of their approximation under an assumption of invertibility. In the next theorem, we provide a precise L² bound on the error of our approximation (3.5). This bound is key to the results that follow.
Consequently, the bound converges to 0 as N → ∞ and N/M → 0.
The proof of Theorem 3.4 relies on the following uniform bound on the difference between integrals of Y and the corresponding discretely observed process.
The bound converges to 0 as N → ∞ and N/M → 0.
Proof. In what follows we will also use the following notation. By the Cauchy-Bunyakovsky-Schwarz inequality we have the stated bound. This finishes the proof.
Using the bound from Lemma 3.5 we obtain the stated estimate. Since, as N → ∞, N^{-1/2}(L(N) − Nμ) converges to a normal distribution with variance η², the following corollary is immediate in Case (III).

Estimated unit increments
To proceed with the sample covariances, we look at finer properties of the estimated increments over unit intervals. Recall notation (2.10). In analogy we define Δ_1 L^{(M)}_n, the estimated increment over the interval (n − 1, n] when the sampling frequency is M, as opposed to (2.10), where the increment over ((n − 1)h, nh] is considered. Using the inversion formula we have (3.10): the estimated increment over (n − 1, n] is represented as the sum of the estimated increments over the small intervals ((n − 1), (n − 1) + 1/M ], . . . , (n − 1/M, n].

Properties of the estimated increments
We prove some properties of the estimated increments Δ_1 L^{(M)}_n, n ≥ 1, defined in (3.8). Remark 3.7. Since Y is a strictly stationary process with a finite second moment, it can easily be shown using (3.10) that Δ_1 L^{(M)}_n, n ≥ 1, is a second-order stationary sequence.
Next, we show how closely the estimated increments Δ_1 L^{(M)}_n approximate the true increments Δ_1 L_n, n ≥ 1. Note that Lemmas 3.8 and 3.9 mimic Theorem 3.4 and Lemma 3.5, but allow a finer analysis of the discretization error.
In order to prove Lemma 3.8 we start with the following uniform approximation. The proof is given in Appendix A.2.3.
Proof of Lemma 3.8. (i) Using equations (3.10) and (3.9) we compute the difference between the estimated and true increments. Taking the L² norm and using Lemma 3.9, and then applying Lemma 2.4, the result follows.
(ii) The result is a consequence of (i). The following approximation result can be viewed as a corollary of Lemma 3.8. The proof is given in Appendix A.2.4.
The bound converges to 0 as M → ∞, uniformly in n and k.
Remark 3.11. We note that the bounds in Lemma 3.8 (i) and Lemma 3.10 are independent of N , so that convergence is uniform in N . As a consequence, we can use the above estimates both in Case (II) and (III).
Proof. Parts (ii) and (iii) are immediate consequences of Lemmas 3.8(ii) and 3.10, respectively. As for (i), using equation (3.10) and Lemma 2.4 we have:

Asymptotics for the sample mean
The next result is a simple corollary of Theorem 3.4 and Corollary 3.6. We note that the law of large numbers requires only N ∧ M → ∞, while the central limit theorem needs N → ∞ and N/M → 0.

Proof of (i). By Theorem 3.4, the approximation error vanishes as N ∧ M → ∞, and (i) follows.
Proof of (ii). This is an immediate consequence of Corollary 3.6.
Proof of (iii). We have consequently the result follows by (i).

Asymptotics for the sample covariances
In this section we consider the sample covariances of the estimated increments. Again, we note that the sample covariance here has a different meaning than the one in (3.3). There, "lag k" means that we look at the dependence between ΔL^{(h)}_n and ΔL^{(h)}_{n+k} (n ≥ 1), that is, the estimated increments over ((n − 1)h, nh] and ((n + k − 1)h, (n + k)h]. In the present setting we look at the dependence between the estimated increments over (n − 1, n] and (n + k − 1, n + k], when the CAR(1) process is sampled at frequency h = 1/M .
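The lag-k sample covariance and correlation used in this section can be computed with the standard mean-centred estimators; the sketch below is ours and assumes this standard convention.

```python
import numpy as np

# Hedged sketch of the lag-k sample covariance and correlation of a sequence
# of (estimated) unit increments; the standard mean-centred estimator with
# divisor n is used.
def sample_gamma(x, k):
    n = len(x)
    xc = x - x.mean()
    return np.dot(xc[: n - k], xc[k:]) / n

def sample_rho(x, k):
    return sample_gamma(x, k) / sample_gamma(x, 0)

# For iid increments the lag-1 sample correlation is O(1/sqrt(N))
rng = np.random.default_rng(5)
x = rng.standard_normal(10_000)
rho1 = sample_rho(x, 1)
```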

Theorem 3.14. Consider scenario (III). Let Y be a strictly stationary CAR(1) process driven by a second-order Lévy process L such that (2.1) holds. Then
The proof is given in Appendix A.2.5. We proceed with two remarks. (ii) Since L is a Lévy process, it has stationary independent increments. If X_n = ΔL_{n+k} ΔL_n, k ≥ 1, n ≥ 1, then X_n is a strictly stationary k-dependent sequence with mean zero (assuming μ = 0) and autocovariance function γ_X(n) = η⁴ if n = 0 and zero otherwise. Using Theorem 6.4.2 in [4] we have:

CLT for the Lévy-driven CAR(1) process
This section is a brief digression from the central topic of our paper. Here we use our estimator of the driving process L to prove a central limit theorem for the integrated CAR(1) process Y. The significance of this result is that the proof is quite elementary and does not require any mixing arguments. In Corollary 3.6, we proved a CLT for the estimated process L^{(M)}(N) by showing its closeness to the true Lévy process L(N). In this section, a similar approach leads to a CLT for the integrated process ∫_0^N Y(s) ds; see Theorem 4.2. We note that results similar to Corollary 4.1 and Theorem 4.2 can be obtained by different methods: [18] utilizes mixing properties of the CAR(1) model, while [10] considers sample means and sample covariances of the discretely sampled process Y. In our Scenario (I) (fixed h), asymptotic normality of the sample mean and sample covariances is proven under appropriate moment assumptions.

Corollary 4.1. Consider scenario (III). Let Y be a strictly stationary CAR(1) process driven by a second-order Lévy process L such that (2.1) holds. Then
Proof. From (3.10), since Y is stationary, (i) follows by Corollary 3.13(i) and (ii) follows by Corollary 3.6.
In fact, Corollary 4.1 gives us a very simple proof of the CLT for the integrated Lévy-driven CAR(1) process Y.

Theorem 4.2. Let Y be a strictly stationary CAR(1) process driven by a second-order Lévy process L such that (2.1) holds. Then N^{-1/2} ( ∫_0^N Y(s) ds − Nσμ/a ) converges in distribution, as N → ∞, to a normal law with mean zero and variance η²σ²/a².

Proof. If we center equation (3.6), we obtain a corresponding decomposition for every N and M. Arguing as in Theorem 25.4 of [2], for any y′ < x < y″ with y″ − x < ε and x − y′ < ε, the required inequalities hold for all N, M. Therefore, letting N → ∞ and choosing M such that N/M → 0, by Corollary 4.1 the limit is W, where W is normal with mean zero and variance η²σ²/a². Since ε is arbitrary, the result follows.
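The limiting variance η²σ²/a² can be checked by Monte Carlo. The sketch below is ours (Euler discretization, all parameter values illustrative): it estimates the variance of N^{-1/2} ∫_0^N Y(s) ds for a standard Brownian driver (μ = 0, η = 1), for which the limit variance equals 1 when a = σ = 1.

```python
import numpy as np

# Hedged sketch: Monte Carlo check that Var(N^{-1/2} int_0^N Y(s) ds) is close
# to eta^2 sigma^2 / a^2 (= 1 for the illustrative values below, with a
# standard Brownian driver, mu = 0 and eta = 1).
rng = np.random.default_rng(6)
a, sigma = 1.0, 1.0
dt, N_time, reps = 0.02, 50.0, 400
n = int(N_time / dt)

vals = np.empty(reps)
for r in range(reps):
    dL = np.sqrt(dt) * rng.standard_normal(n)
    y = np.empty(n + 1)
    y[0] = rng.normal(0.0, sigma / np.sqrt(2.0 * a))  # stationary start
    for i in range(n):
        y[i + 1] = y[i] - a * y[i] * dt + sigma * dL[i]
    # Trapezoidal rule for int_0^N Y(s) ds, normalized by sqrt(N)
    vals[r] = (0.5 * (y[0] + y[-1]) + y[1:-1].sum()) * dt / np.sqrt(N_time)

var_hat = vals.var()  # close to eta^2 sigma^2 / a^2 = 1
```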

Test statistics
If Y is a CAR(1) model driven by a process L, we can use the estimated increments to test the null hypothesis H₀ that L has uncorrelated increments, which holds in particular when L is a Lévy process. We reject H₀ for a large absolute value of the statistic W_{Δ_1 L^{(M)}}(k) for a specified value of k, where W_{Δ_1 L^{(M)}}(k) is defined as follows:

Proof. The result follows by Theorem 3.14 and Slutsky's theorem, since by Lemma 3.13, η̂²/η² → 1 in probability.
Under H₀, for large N and M with N/M small, we have
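For concreteness, a statistic of this kind can be sketched as a studentized lag-k sample autocorrelation; we assume this form for the sketch below (the paper's exact normalization may differ). It is asymptotically N(0, 1) for iid increments with finite variance.

```python
import numpy as np

# Hedged sketch: a statistic of the kind described, assumed here to be the
# studentized lag-k sample autocorrelation of the (estimated) unit increments,
#   W(k) = sqrt(N) * gamma_hat(k) / gamma_hat(0),
# which is asymptotically N(0, 1) when the increments are iid with finite
# variance.  The exact definition in the paper may differ in normalization.
def W_stat(increments, k):
    x = increments - increments.mean()
    n = len(x)
    gamma0 = np.dot(x, x) / n
    gammak = np.dot(x[: n - k], x[k:]) / n
    return np.sqrt(n) * gammak / gamma0

rng = np.random.default_rng(7)
inc = rng.standard_normal(5000)   # stand-in for iid Levy unit increments
w1 = W_stat(inc, 1)               # approximately standard normal under H0
reject = abs(w1) > 1.96           # level-0.05 two-sided test
```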

Simulation study
In this section we investigate the behavior of our test statistic W_{Δ_1 L^{(M)}}(1) under both H₀ and H₁. Further, by comparing W_{Δ_1 L^{(M)}}(1) calculated with the true and with the recovered values of the driving process, we see that the error introduced by discrete sampling at high frequency becomes very small.

Brownian motion driven CAR(1) process
For a large K, we simulate an iid sequence (noise) Z_{i/K} ∼ N(0, 1/K), i = 1, 2, . . . , NK. We approximate the driving process B(t) on [0, N] by the partial sums of this noise. In order to simulate Y, the Brownian motion-driven CAR(1) process, we start from its defining stochastic differential equation, which by the Euler scheme can be approximated by a difference equation. Using the simulated noise Z_{i/K}, we compute the estimates of the recovered increments Δ_1 B^{(M)}_n. In Figure 1 below, we compare the estimated increments with the true increments Δ_1 B_n = B_n − B_{n−1}.
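The steps above can be condensed into a short script. This is our own sketch with smaller values of K, M, N than in the paper, purely to keep it fast; it recovers the unit increments by the trapezoid-corrected inversion formula over each unit interval and compares them with the true increments.

```python
import numpy as np

# Hedged sketch of the Brownian-motion example: simulate noise Z_{i/K},
# build Y by the Euler scheme, observe Y at frequency M, and recover the unit
# increments of B via the trapezoidal inversion formula (cf. (3.10)).
# K, M, N are smaller than in the paper, purely to keep the sketch fast.
rng = np.random.default_rng(8)
a, sigma = 0.9, 1.0
K, M, N = 1000, 100, 50          # simulation grid K, sampling grid M, horizon N
assert K % M == 0

Z = rng.normal(0.0, np.sqrt(1.0 / K), N * K)     # noise increments
B = np.concatenate(([0.0], np.cumsum(Z)))        # approximate driving BM

y = np.empty(N * K + 1)
y[0] = 0.0
for i in range(N * K):
    y[i + 1] = y[i] - a * y[i] / K + sigma * Z[i]   # Euler step, dt = 1/K

y_s = y[:: K // M]                                  # Y sampled at 0, 1/M, ...
h = 1.0 / M
# Trapezoid-corrected increments over each sampling interval, summed in
# blocks of M to give the estimated unit increments
small = (y_s[1:] - y_s[:-1] + 0.5 * a * h * (y_s[:-1] + y_s[1:])) / sigma
inc_hat = small.reshape(N, M).sum(axis=1)
inc_true = B[::K][1:] - B[::K][:-1]                 # true unit increments
max_diff = np.max(np.abs(inc_hat - inc_true))
```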
To show that Δ_1 B_n and Δ_1 B^{(M)}_n are not identical, we display their differences in Figure 2.
In Figure 3, we compute the sample autocorrelation function for both the recovered increments (Δ_1 B^{(M)}_n) and the true increments (Δ_1 B_n). The sample autocorrelations are very similar for the true and estimated increments, in agreement with equation (A.11). Figure 3 also reflects the zero correlation of the true increments and the asymptotic zero correlation of the estimated increments, in agreement with Theorem 3.14 and Remark 3.15. We want to test, at level 0.05, the hypothesis that the driving process has uncorrelated increments. To assess the performance of our proposed test statistic based on estimated increments (see Lemma 5.1), we compare it with the corresponding statistic based on the true increments; that is, we compare the performance of the statistics W_{Δ_1 B}(1) and W_{Δ_1 B^{(M)}}(1). Tables 1 and 2 give the empirical levels α_{Δ_1 B} and α_{Δ_1 B^{(M)}} for the two tests, over R = 400 simulations, with nominal level 0.05. We consider various values of the parameters a and σ.
These results are consistent with a nominal level 0.05, since with R = 400, the empirical level should fall in the range 0.05 ± 0.021 95% of the time.
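The ±0.021 margin is the usual binomial 95% band for an empirical proportion, which can be verified directly:

```python
import math

# The 95% band for an empirical level with R = 400 replications and true
# level p = 0.05 is p +/- 1.96 * sqrt(p * (1 - p) / R).
p, R = 0.05, 400
margin = 1.96 * math.sqrt(p * (1 - p) / R)   # about 0.021
```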
The test statistics based on the true and recovered increments give virtually identical empirical levels, except for large values of a (a = 100 or a = 1000). (Table 2 fixes a = 0.9, K = 5000, μ = 0, R = 400, with columns N = 50, M = 100; N = 100, M = 100; N = 100, M = 300; and N = 100, M = 500.) Looking at the formula for the recovered noise (3.10), we see that large values of a introduce more variability, so this result is to be expected. For a Gaussian driving process, the performance of the test statistic does not seem particularly sensitive to the sampling frequency M or to the value of the ratio N/M. Also, a sample size of N = 50 seems adequate.

Gamma-driven CAR(1) process
Following the same steps as for the Brownian motion-driven CAR(1) process, we simulate the driving process G(t) using a discrete approximation, and use the Euler scheme to approximate Y, a gamma-driven CAR(1) process, by a difference equation. Figure 4 shows a sample path Y_t with K = 5000 and N = 100, together with Y_Sampled, the values of Y_t observed at the times {0, 1/M, 2/M, . . . , N} with M = 500. In Figure 5 we display Δ_1 G_n = G_n − G_{n−1} and Δ_1 G^{(M)}_n computed by equation (3.10), using Y_Sampled from Figure 4. The differences Δ_1 G_n − Δ_1 G^{(M)}_n are displayed in Figure 6, and the sample autocorrelation functions in Figure 7. As before, the sample autocorrelations support our theoretical results. Tables 3 and 4 give the computed empirical levels α_{Δ_1 G} and α_{Δ_1 G^{(M)}} for tests based on W_{Δ_1 G}(1) and W_{Δ_1 G^{(M)}}(1) respectively, following the same procedures as before, over R = 400 simulations, with nominal level 0.05. We again consider various values of the parameters a and σ.
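One way to simulate the gamma driving process, which we assume here for illustration, parametrizes the increments over steps dt as Gamma(shape = μ² dt/η², scale = η²/μ), so that E[L(t)] = μt and Var(L(t)) = η²t; the paper's exact parametrization may differ.

```python
import numpy as np

# Hedged sketch: a gamma process with E[L(t)] = mu*t and Var(L(t)) = eta^2*t
# has independent increments over steps dt distributed as
#   Gamma(shape = mu^2 * dt / eta^2, scale = eta^2 / mu).
# The CAR(1) path is then built with the same Euler scheme as before.
# All parameter values are illustrative.
rng = np.random.default_rng(9)
mu, eta = 1.0, 1.0
a, sigma = 0.9, 1.0
K, N = 1000, 200
dt = 1.0 / K

dG = rng.gamma(shape=mu**2 * dt / eta**2, scale=eta**2 / mu, size=N * K)
G = np.concatenate(([0.0], np.cumsum(dG)))       # approximate gamma process

y = np.empty(N * K + 1)
y[0] = sigma * mu / a               # start near the stationary mean
for i in range(N * K):
    y[i + 1] = y[i] - a * y[i] * dt + sigma * dG[i]

mean_G_rate = G[-1] / N             # should be close to mu
mean_Y = y.mean()                   # should be close to sigma*mu/a
```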
We note larger discrepancies between the empirical levels α_{Δ_1 G} and α_{Δ_1 G^{(M)}} for small M than we did for Brownian motion. This is likely due to the larger discrepancies between the true and recovered increments, as illustrated in Figure 8 for M = 100. As a result, in the case of a gamma driving process, the tests are more sensitive to the sampling frequency M and to the sample size N. However, the ratio N/M does not appear to play a significant role.
The left-hand plot of Figure 8 shows the differences between the true and recovered increments with M = 100. Four large peaks obscure the remaining differences; removing these peaks allows us to display the remaining differences in the right-hand plot.

An alternative case: Fractional Brownian motion-driven CAR(1) process
As an example of a second-order non-Lévy process, we consider fractional Brownian motion B_H. For H ≠ 1/2, B_H is a second-order process (E[B_H²(1)] < ∞) with stationary but dependent increments. (Table 3 fixes σ = 1, μ = 1, η = 1, K = 5000, R = 400, with columns N = 50, M = 100; N = 100, M = 100; N = 100, M = 300; and N = 100, M = 500.) To illustrate the behavior of the test statistics W_{Δ_1 L}(1) and W_{Δ_1 L^{(M)}}(1) under an alternative, we simulate fractional Brownian motion B_H(t) with Hurst parameter H using the techniques developed in [9]. Following the same procedures as before, we approximate the process Y_t (the B_H-driven CAR(1) process) and Y_Sampled. The sample autocorrelation functions are displayed in Figure 9, where the positive correlation at lag 1 is reflected by both the recovered increments (Δ_1 B_H^{(M)}_n) and the true increments (Δ_1 B_{H,n}) with H = 0.8. We considered the power functions of both level-0.05 tests, based on W_{Δ_1 B_H}(1) and W_{Δ_1 B_H^{(M)}}(1), and computed the empirical rejection rates β_{Δ_1 B_H} and β_{Δ_1 B_H^{(M)}} for different values of the Hurst parameter H; these values are given in Table 5, and the power functions are illustrated in Figure 10.
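The positive lag-one correlation seen under this alternative is consistent with the theoretical autocorrelation of fractional Gaussian noise (the unit increments of B_H), which can be evaluated directly; the formula below is the standard one, not taken from the paper.

```python
# Theoretical autocorrelation of fractional Gaussian noise (the unit
# increments of fractional Brownian motion with Hurst parameter H):
#   rho(k) = 0.5 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}).
def fgn_rho(k, H):
    return 0.5 * (abs(k + 1) ** (2 * H) - 2 * abs(k) ** (2 * H)
                  + abs(k - 1) ** (2 * H))

rho1_H08 = fgn_rho(1, 0.8)   # positive dependence under the alternative
rho1_H05 = fgn_rho(1, 0.5)   # standard BM: uncorrelated increments
```

For H = 0.8 the lag-one correlation is about 0.52, which is the level the sample autocorrelations of both the true and recovered increments should approach.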
Using Lemma 2.4 we obtain the stated expression. This completes the proof of (ii), and (iii) is an immediate consequence.
A.2.1. Proof of Proposition 3.1
(i): Using ergodicity of Y^{(h)}_n, n ≥ 1, and Lemma 2.4, the result follows, since the second term in equation (A.1) converges to zero in probability as N → ∞.
We proceed with the proof of (ii). Since Y^{(h)}_n is a strictly stationary AR(1) process (cf. (2.5)), it can be written as a moving average (cf. [4]). Now, by equation (2.7), and noting (cf. (2.8)), Theorem 7.1.2 in [4] yields a central limit theorem for the first term in (A.1). The second term (multiplied by √N) in (A.1) converges to zero in probability as N → ∞, and so by Slutsky's theorem the result follows.

A.2.2. Proof of Proposition 3.3
(i): We decompose the sample covariances in the standard way and use formula (3.1) to simplify the first term of equation (A.4). Using ergodicity of Y^{(h)}_n, n ≥ 1, we obtain the limit of this term, and a similar argument applies to the other three terms in (A.5). Putting everything together finishes the proof of (i).
(ii): Using equation (A.2) we can rewrite equation (3.1) in the following form. Indeed, by using equation (A.9), the bound converges to 0 since N/M → 0. This completes the proof of the claim (A.10). Now we are ready to finish the proof of (ii). Using Remark 3.15(ii) we have that: