Sample covariances of random-coefficient AR(1) panel model

The present paper obtains a complete description of the limit distributions of sample covariances in an N × n panel as N and n jointly increase, possibly at different rates. The panel is formed by N independent samples of length n from a random-coefficient AR(1) process whose random coefficient has a tail distribution function regularly varying at the unit root with exponent $\beta > 0$. We show that for $\beta \in (0, 2)$ the sample covariances may display a variety of stable and non-stable limit behaviors, with stability parameter depending on $\beta$ and the mutual increase rate of N and n.


Introduction
Dynamic panels providing information on a large population of heterogeneous individuals such as households, firms, etc., observed at regular time periods, are often described by simple autoregressive models with random parameters near unity. One of the simplest models for individual evolution is the random-coefficient AR(1) (RCAR(1)) process with standardized i.i.d. innovations {ε(t), t ∈ Z} and a random autoregressive coefficient a ∈ [0, 1) independent of {ε(t), t ∈ Z}. Granger [10] observed that when the distribution of a is sufficiently dense near unity, the stationary solution of the RCAR(1) equation in (1.1) may have long memory in the sense that the sum of its lagged covariances diverges. To be more specific, assume that the random coefficient a ∈ [0, 1) has a density function of the form (1.2), where β > 0 and ψ(x), x ∈ [0, 1), is a bounded function with lim_{x↑1} ψ(x) =: ψ(1) > 0. Then for β > 1 the covariance function of the stationary solution of the RCAR(1) equation in (1.1) with standardized finite-variance innovations decays as t^{−(β−1)}; see (1.3). The same long memory property applies to the contemporaneous aggregate of N independent individual evolutions {X_i(t)}, i = 1, ..., N, of (1.1) and to the limit Gaussian aggregated process arising when N → ∞. Various properties of the RCAR(1) and more general RCAR equations were studied in Gonçalves and Gouriéroux [9], Zaffaroni [32], Celov et al. [3], Oppenheim and Viano [18], Puplinskaitė and Surgailis [25], Philippe et al. [19] and other works; see Leipus et al. [14] for a review.
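The RCAR(1) recursion with a mixing density of the form (1.2) is easy to simulate. The sketch below assumes, for illustration only, the special case ψ(x) ≡ β, so that the coefficient density is β(1 − x)^{β−1} on [0, 1) and a can be drawn by inverse-CDF sampling as 1 − U^{1/β} with U uniform; the paper only requires ψ bounded with ψ(1) > 0.

```python
import numpy as np

rng = np.random.default_rng(0)

def rcar1_sample(n, beta, burn_in=500, rng=rng):
    """Simulate one length-n path of the RCAR(1) process X(t) = a X(t-1) + eps(t).

    The coefficient a is drawn once per path from the density
    beta * (1 - x)**(beta - 1) on [0, 1) (an illustrative special
    case of the mixing density (1.2) with psi(x) = beta).
    """
    a = 1.0 - rng.uniform() ** (1.0 / beta)   # inverse-CDF sampling
    eps = rng.standard_normal(burn_in + n)    # standardized i.i.d. innovations
    x = np.empty(burn_in + n)
    x[0] = eps[0]
    for t in range(1, burn_in + n):
        x[t] = a * x[t - 1] + eps[t]
    return x[burn_in:]                        # discard burn-in to approach stationarity

# an N x n panel of independent RCAR(1) paths
N, n, beta = 200, 100, 1.5
panel = np.stack([rcar1_sample(n, beta) for _ in range(N)])
print(panel.shape)
```

Each row of `panel` is one individual evolution; rows are independent because each path draws its own coefficient and innovations.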
Statistical inference in the RCAR(1) model was discussed in several works. Leipus et al. [13] and Celov et al. [4] discussed nonparametric estimation of the mixing density φ(x) using empirical covariances of the limit aggregated process. For panel RCAR(1) data, Robinson [29] and Beran et al. [1] discussed parametric estimation of the mixing density. In a nonparametric context, Leipus et al. [15] studied estimation of the empirical d.f. of a from panel RCAR(1) observations and derived its asymptotic properties as N, n → ∞, while [16] discussed estimation of β in (1.2) and testing for long memory in the above panel model. For an N × n panel comprising N samples {X_i(t), t = 1, ..., n}, i = 1, ..., N, of independent RCAR(1) processes in (1.1) with mixing distribution in (1.2), Pilipauskaitė and Surgailis [20] studied the asymptotic distribution of the sample mean X̄_{N,n} in (1.4) as N, n → ∞, possibly at different rates. [20] showed that for 0 < β < 2 the limit distribution of this statistic depends on whether N/n^β → ∞ or N/n^β → 0, in which cases X̄_{N,n} is asymptotically stable with stability parameter depending on β and taking values in the interval (0, 2]; see Table 2 below. As shown in [20], under the 'intermediate' scaling N/n^β → c ∈ (0, ∞), the limit distribution of X̄_{N,n} is more complicated and is given by a stochastic integral with respect to a certain Poisson random measure. The present paper discusses the asymptotic distribution of the sample covariances (covariance estimates)

γ_{N,n}(t, s) := (1/(Nn)) Σ_{1≤i,i+s≤N} Σ_{1≤k,k+t≤n} (X_i(k) − X̄_{N,n})(X_{i+s}(k + t) − X̄_{N,n}),   (1.5)

computed from a similar RCAR(1) panel {X_i(t), t = 1, ..., n, i = 1, ..., N} as in [20], as N, n jointly increase, possibly at different rates, with the lag (t, s) ∈ Z² fixed, albeit arbitrary. In particular, for (t, s) = (0, 0), (1.5) agrees with the sample variance (1.6). The true covariance function γ(t, s) := E X_i(k) X_{i+s}(k + t) of the RCAR(1) panel model with mixing density in (1.2) exists when β > 1 and is given by (1.7), where γ(t) is defined in (1.3). Note that γ(t) cannot be recovered from a single realization of the nonergodic RCAR(1) process {X(t)} in (1.1). However, the covariance function in (1.7) can be consistently estimated from the RCAR(1) N × n panel when N, n → ∞, together with convergence rates. The limit distribution of the sample covariance may exist even for 0 < β < 1, when the covariance itself is undefined. As it turns out, the limit distribution of γ_{N,n}(t, s) depends on the mutual increase rate of N and n, and is also different for temporal, or iso-sectional, lags (s = 0) and cross-sectional lags (s ≠ 0). The distinction between the cases s = 0 and s ≠ 0 is due to the fact that, in the latter case, the statistic in (1.5) involves products X_i(k)X_{i+s}(k + t) of independent processes X_i and X_{i+s}, whereas in the former case X_i(k) and X_i(k + t) are dependent r.v.s.
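The statistic (1.5) is straightforward to compute from a panel. A minimal Python sketch (assuming, as an illustration, that (1.5) centers observations by the overall sample mean X̄_{N,n}; the helper name `sample_cov` is ours, and lags are restricted to t, s ≥ 0 for brevity):

```python
import numpy as np

def sample_cov(panel, t, s):
    """Sample covariance of an N x n panel at lag (t, s), in the spirit of (1.5):
    average of (X_i(k) - Xbar)(X_{i+s}(k+t) - Xbar) over all admissible (i, k).
    Centering by the overall mean Xbar is our assumption about the exact form.
    """
    N, n = panel.shape
    c = panel - panel.mean()
    # shift by s rows (sections) and t columns (time); assumes s, t >= 0
    a = c[: N - s, : n - t]
    b = c[s:, t:]
    return (a * b).mean()

rng = np.random.default_rng(1)
panel = rng.standard_normal((50, 80))
# rows of an i.i.d. panel are independent, so a cross-sectional lag gives ~0,
# while lag (0, 0) recovers the sample variance (~1 here)
print(sample_cov(panel, t=0, s=1), sample_cov(panel, t=0, s=0))
```

For an RCAR(1) panel the s = 0 and s ≠ 0 cases differ exactly as described above: the s ≠ 0 products mix independent rows, the s = 0 products do not.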
The main results of this paper are summarized in Table 1 below. Rigorous formulations are given in Sections 3 and 4. For better comparison, Table 2 presents the results of [20] about the sample mean in (1.4) for the same panel model.
Table 1: Limit distribution of sample covariances γ_{N,n}(t, s) in (1.5) (columns: mutual increase rate of N, n; parameter region; limit distribution).

(ii) 'Intermediate Poisson' limits in Tables 1-2 refer to infinitely divisible distributions defined through certain stochastic integrals w.r.t. a Poisson random measure. A similar terminology was used in [22].
(iii) It follows from our results (see Theorem 4.1 below) that a scaling transition similar to that for the sample mean [20] arises in the interval 0 < β < 2 for temporal sample covariances and product random fields X_v(u)X_v(u + t), (u, v) ∈ Z², involving temporal lags, with the critical rate N ∼ n^β separating regimes with different limit distributions. For 'cross-sectional' product fields X_v(u)X_{v+s}(u + t), (u, v) ∈ Z², s ≠ 0, involving cross-sectional lags, a similar scaling transition occurs in the interval 0 < β < 3/2 with the critical rate N ∼ n^{2β} between different scaling regimes; see Theorem 3.1. The notion of scaling transition for long-range dependent random fields on Z² was discussed in Puplinskaitė and Surgailis [26], [27] and Pilipauskaitė and Surgailis [22], [23].
(iv) The limit distributions of cross-sectional sample covariances in the missing intervals 0 < β < 1/2 and 0 < β < 3/4 of Table 1 b) are given in Corollary 3.1 below. They are more complicated and not included in Table 1 b), since the term Nn(X̄_{N,n})², due to the centering by the sample mean in (1.5), may play the dominating role.
(v) We expect that the asymptotic distribution of sample covariances in the RCAR(1) panel model with common innovations (see [21]) can be analyzed in a similar fashion. Due to the differences between the two models (the common and the idiosyncratic innovation cases), the asymptotic behavior of sample covariances might be quite different in these two cases.
(vi) The results in Table 1 a) are obtained under finite 4th moment conditions on the innovations; see Theorems 4.1 and 4.2 below. Although the last condition does not guarantee the existence of the 4th moment of the RCAR(1) process, it is crucial for the limit results, including the CLT in the case β > 2. Scaling transition for sample variances of long-range dependent Gaussian and linear random fields on Z² with finite 4th moment was established in Pilipauskaitė and Surgailis [23]. On the other hand, Surgailis [31] and Horváth and Kokoszka [12] obtained stable limits of sample variances and autocovariances for long memory moving averages with finite 2nd moment and infinite 4th moment. Finally, we mention the important works of Davis and Resnick [6] and Davis and Mikosch [5] on limit theory for sample covariance and correlation functions of moving averages and some nonlinear processes with infinite variance, respectively.
The rest of the paper is organized as follows. Section 2 presents some preliminary facts, including the definition and properties of the intermediate processes appearing in Table 1. Section 3 contains rigorous formulations and proofs of the asymptotic results for cross-sectional sample covariances (1.5), s ≠ 0, and the corresponding partial sums processes. Analogous results for temporal sample covariances and partial sums processes are presented in Section 4, which also contains some applications of these results to the estimation of the autocovariance function γ(t) in (1.3) from panel data. Some auxiliary proofs are given in the Appendix.

Preliminaries
This section contains some preliminary facts which will be used in the following sections.

The intermediate process Z_β
where β > 0 is a parameter and P_B is the Wiener measure on C(R). Let dM̄_β := dM_β − dµ_β be the centered Poisson random measure. We shall often use the finiteness of the following integrals. Let {Z_x} be a family of stationary Ornstein-Uhlenbeck (O-U) processes subordinated to B, and consider the family of integrated products of independent O-U processes indexed by x_1, x_2 > 0. We use the representation (2.11), where µ_β(L_1) < ∞. For 1/2 < β < 3/2 the two integrals in (2.13) can be combined into a single one. These and other properties of Z_β are stated in the following proposition, whose proof is given in the Appendix. We also refer to [28] and [20] for general properties of stochastic integrals w.r.t. a Poisson random measure.
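Stationary O-U processes such as the family Z_x above can be simulated exactly on a time grid, since their transition law is Gaussian AR(1). A minimal sketch follows; the unit-variance parametrisation dZ = −x Z dt + √(2x) dB is our illustrative assumption and not necessarily the paper's normalisation.

```python
import numpy as np

def ou_path(x, T=1.0, steps=1000, rng=None):
    """Exact-in-distribution simulation of a stationary OU process with
    mean-reversion rate x > 0 and unit stationary variance:
    dZ = -x Z dt + sqrt(2x) dB  (so Var Z(t) = 1 in equilibrium).
    """
    rng = rng or np.random.default_rng(0)
    dt = T / steps
    rho = np.exp(-x * dt)               # exact one-step autocorrelation
    sd = np.sqrt(1.0 - rho ** 2)        # exact conditional std. deviation
    z = np.empty(steps + 1)
    z[0] = rng.standard_normal()        # start in the stationary law
    for k in range(steps):
        z[k + 1] = rho * z[k] + sd * rng.standard_normal()
    return z

z = ou_path(x=2.0)
print(len(z))
```

Products of two such independent paths, integrated over time, give toy versions of the integrated products of O-U processes indexed by (x_1, x_2) discussed above.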
(iii) Z_β has a.s. continuous trajectories.
(iv) (Asymptotic self-similarity) For any ... where V⁺_β, V*_β are completely asymmetric β-stable r.v.s with ch.f.s E exp{iθV⁺_β} = exp{ψ(1)...}. We use some facts, given in Proposition 2.4 below, about the conditional variance of the partial sums process of the product of RCAR(1) processes.

Conditional long-run variance of products of RCAR(1) processes
For i = j we assume additionally that Eε_i⁴(0) < ∞.
Proposition 2.4. We have ..., where ..., with cum_4 being the 4th cumulant of ε_i(0). Moreover, for any n ≥ 1, i, j ∈ Z, a_i, a_j ∈ [0, 1), ... Hence, and from (2.30), we obtain ...

3 Asymptotic distribution of cross-sectional sample covariances

Theorems 3.1 and 3.2 discuss the asymptotic distribution of the partial sums process (3.1), where t, s ∈ Z, s ≠ 0, are fixed and N and n tend to infinity, possibly at different rates. The asymptotic behavior of the sample covariances γ_{N,n}(t, s) is discussed in Corollary 3.1. As it turns out, these limit distributions do not depend on t, s, which is due to the fact that the sectional processes {X_i(t), t ∈ Z}, i ∈ Z, are independent and stationary.
Theorem 3.1. Let the mixing distribution satisfy condition (1.2).

(i) ... where the limit processes are the same as in (2.18), ...

(ii) Let λ_∞ = 0 and E|ε(0)|^{2p} < ∞ for some p > 1. Then ... where the limit process is the same as in (2.20).
where Z β is the intermediate process in (2.13).
Remark 3.1. Our proof of Theorem 3.1 (ii) requires establishing the asymptotic normality of a bilinear form in i.i.d. r.v.s which has a non-zero diagonal; see the r.h.s. of (3.52). For this purpose, we use the martingale CLT and impose the additional condition E|ε(0)|^{2p} < ∞, p > 1. To establish the CLT for quadratic forms with non-zero diagonal, [2] took a similar approach and also needed 2p finite moments. In Theorem 3.2 we also assume E|ε(0)|^{2p} < ∞, p > 1. However, this result can be proved under Eε²(0) < ∞ by applying another technique, namely approximation by m-dependent r.v.s. Moreover, the result holds if (1.2) is replaced by EA 12 < ∞.
Note that the asymptotic distribution of the sample covariances γ_{N,n}(t, s) in (1.5) coincides with that of the statistic on the r.h.s. of (3.8). For s ≠ 0 the limit behavior of the first term on the r.h.s. of (3.8) can be obtained from Theorems 3.1 and 3.2.
It turns out that for some values of β the second term on the r.h.s. can play the dominating role. The limit behavior of X̄_{N,n} was identified in [20] and is given in the following proposition, with some simplifications.
From Theorem 3.1 and Proposition 3.1 we see that the r.h.s. of (3.8) may exhibit two 'bifurcation points' of the limit behavior, viz., at N ∼ n^{2β} and N ∼ n^β. Depending on the value of β, the first or the second term may dominate, and the limit behavior of γ_{N,n}(t, s) becomes more complicated. The following corollary provides this limit without detailing the 'intermediate' situations, and with the exception of some particular values of β where both terms on the r.h.s. may contribute to the limit. Essentially, the corollary follows by comparing the normalizations in Theorem 3.1 and Proposition 3.1.
where Z ∼ N (0, 1) and σ ∞ is the same as in Theorem 3.1 (i).
The proof of Theorem 3.1 in cases (i)-(iii) is given in subsections 3.1-3.3. To avoid excessive notation, the discussion is limited to the case (t, s) = (0, 1), or the partial sums process S_{N,n}(τ). Later on we shall extend the results to the general case (t, s), s ≠ 0.
Let us give an outline of the proof of Theorem 3.1. Similarly to [20], we use the method of characteristic functions combined with 'vertical' Bernstein blocks, due to the fact that S_{N,n} is not a sum of row-independent summands as in [20]. Write

S_{N,n}(τ) = S_{N,n;q}(τ) + S†_{N,n;q}(τ) + S‡_{N,n;q}(τ),   (3.17)

where the main term S_{N,n;q}(τ) is a sum of Ñ_q 'large' blocks of size q − 1, see (3.18). The growth rate of q ∈ N in (3.19) will be slow enough (e.g., q = O(log N)) and will be specified later on. The two other terms in the decomposition (3.17), see (3.20), contain respectively Ñ_q = o(N) and N − qÑ_q < q = o(N) row sums and will be shown to be negligible. More precisely, we show that in each case (i)-(iii) of Theorem 3.1, (3.21)-(3.22) hold, where A_{N,n} and S_β denote the normalization and the limit process, respectively. Note that the summands Y_{k,n;q}, 1 ≤ k ≤ Ñ_q, in (3.18) are independent and identically distributed, and that the limit S_β(τ) is infinitely divisible in cases (i)-(iii) of Theorem 3.1. Hence the use of characteristic functions to prove (3.21) is natural. The proofs are limited to one-dimensional convergence at a given τ > 0, since the convergence of general finite-dimensional distributions follows in a similar way. Accordingly, the proof of (3.21) for fixed τ > 0 reduces to (3.24). To prove (3.24), we use the identity (3.27), where the sum Σ_{|D|≥2} is taken over all subsets D ⊂ {1, ..., q − 1} of cardinality |D| ≥ 2. Applying (3.27) with w_i = e^{iθy_i(τ)} − 1, we obtain

Φ_{N,n;q}(θ) := Ñ_q(q − 1) E[e^{iθy_1(τ)} − 1] + Ñ_q Σ_{|D|≥2} E ∏_{i∈D} (e^{iθy_i(τ)} − 1).
(3.28) Thus, since Ñ_q(q − 1)/N → 1, (3.24) follows from (3.29) and (3.30). Let us explain the main idea of the proof of (3.29). Assuming (3.2), the l.h.s. of (3.29) can be written as (3.31), where B_{N,n} → ∞ is a scaling factor of the autoregressive coefficient. In cases (ii) and (iii) of Theorem 3.1 (proof of (3.5) and (3.6)) we choose this scaling factor so that the factor N/B^{2β}_{N,n} in front of the integral in (3.31) equals 1, and prove that the integral in (3.31) converges to Φ(θ), the required limit in (3.24), where z(τ; x_1, x_2) is a random process. A similar scaling is used in the remaining case, although there the factor N/B^{2β}_{N,n} = 1/log λ_{N,n} in front of the integral in (3.31) does not trivialize, and the proof of the limit in (3.24) is more delicate. On the other hand, in the case of the Gaussian limit (3.3), the choice of scaling leads to the double Itô-Wiener integral in (2.11), as shown in subsection 3.3 below.
To summarize the above discussion: in each case (i)-(iii) of Theorem 3.1, to prove the limit (3.21) of the main term, it suffices to verify relations (3.29) and (3.30). The proof of the first relation in (3.22) is very similar to that of (3.21), since S†_{N,n;q}(τ) is also a sum of i.i.d. r.v.s and the argument of (3.21) applies with small changes. The proof of the second relation in (3.22) is even simpler. In the proofs we repeatedly use the following inequalities:

Proof of (3.29). For notational brevity, we assume λ_{N,n} = λ_∞ = 1, since the general case as in (3.2) requires only minor changes. Recall from (2.16) that Φ(θ) is given in terms of z(τ; x_1, x_2), the double Itô-Wiener integral in (2.11). Also recall the representation (3.31), (3.32). The convergence, for any fixed x_1, x_2 ∈ R_+, follows from L²-convergence of the kernels:

which holds point-wise for any x_i > 0, s_i ∈ R, s_i ≠ 0, i = 1, 2, and fixed τ > 0. We also use the dominating bound

with C > 0 independent of s_i, x_i, i = 1, 2, which follows from the definition of h_n(•; τ; x_1, x_2) and an elementary inequality. It remains to show the convergence of the corresponding integrals, viz., from (3.31) and E z_{N,n}(τ; x_1, x_2) = 0.

Proof of (3.30). Choose q = q_{N,n} = ⌈log n⌉. Let J_q(θ) denote the l.h.s. of (3.30). Using the identity Σ_{D⊂{1,...,q−1}: |D|≥2} ∏_{i∈D} w_i = Σ_{1≤i<j<q} w_i w_j ∏_{i<k<j} (1 + w_k) with w_i = e^{iθy_i(τ)} − 1, see (3.27), we can rewrite J_q(θ) = Σ_{1≤i<j<q} T_ij(θ), where the summands satisfy a bound with C, δ > 0 independent of n. Using E[y_i(τ) | a_k, ε_j(k), k, j ∈ Z, j ≠ i] = 0 and (3.41), we obtain the required estimates; the remaining terms are handled similarly.
Proof of (3.29). Note that the log-ch.f. of the r.h.s. in (3.5) can be written as (3.47), with σ_0 > 0 given by the integral (3.48). Relation (3.47) follows by the change of variables x_i → (θ²τ/4)... (3.49). Let us prove the (conditional) CLT (3.50), implying the point-wise convergence of the integrands in (3.31) and (3.48) for any fixed (x_1, x_2) ∈ R²_+. As in the rest of the paper, we restrict the proof of (3.50) to one-dimensional convergence, and set τ = 1 for concreteness. Split (3.49) as z_{N,n}(1; x_1, x_2) = ... Arguing as in the proof of (2.29), it is easy to show that ..., where λ_{N,n} → 0, implying the first relation in (3.52). To prove the second relation in (3.52), we use the martingale CLT in Hall and Heyde [11]. (The same approach is used to prove the CLT for quadratic forms in [2].) Towards this aim, we write z⁺_{N,n}(x_1, x_2) as a sum of a zero-mean square-integrable martingale difference array with respect to the filtration F_k generated by {ε_i(s), 1 ≤ ...}. Accordingly, the second convergence in (3.52) follows from (3.53) and (3.54), where the latter is a direct consequence of the asymptotics in (2.27) with a_i = 1 − x_1/N^{1/(2β)}, a_j = 1 − x_2/N^{1/(2β)}. Therefore the first relation in (3.53) follows from (3.54) and (3.55). To show (3.55), we split R_n = R'_n + R''_n into the sum of 'diagonal' and 'off-diagonal' parts, viz., ... Using the elementary bound (3.56) for 1 ≤ s_1, s_2 ≤ n, for 1 < p < 2 and fixed x_1, x_2 > 0 we obtain ...

Now we return to the proof of (3.29), both sides of which are written as the respective integrals (3.31) and (3.47). Due to the convergence of the integrands (see (3.51)), it suffices to justify the passage to the limit by a dominated convergence argument. The dominating function, independent of N and n, is obtained from (3.31) and E z_{N,n}(τ; x_1, x_2) = 0, and from (3.40), (3.41), (2.8), similarly as in the case λ_∞ ∈ (0, ∞) above. This proves (3.29).

Case (i): 1 < β < 3/2. Proof of (3.29). Rewrite the l.h.s. of (3.29) as ..., where z̃_{N,n}(τ; x_1, x_2) is defined as in (3.32) with A_{N,n} replaced by Ã_{N,n} := n² = A_{N,n}/λ^β_{N,n}. As shown in the proof of case (iii) (the 'intermediate' limit), for any ..., see (3.41), and the integrability of Ḡ, see (2.9).
Proof of (3.30) is similar to that in case (iii), 0 < λ_∞ < ∞, above, with q = ⌈log n⌉. It suffices to check the bound (3.43) for T_ij(θ) = T'_ij(θ) + T''_ij(θ) given in (3.42). By the same argument as in (3.44), we obtain ... The bound on E z̃²_{N,n}(τ; x_1, x_2) in (3.62) further implies ..., and the remaining term can be handled in the same way. Whence, the bound in (3.43) follows with any 0 < δ < 3 − 2β, for 1 < β < 3/2. This proves (3.30). The proof of (3.22), using Ñ_q/N → 0 and L_q := N − qÑ_q < q = o(N), is completely analogous to that in case (iii), 0 < λ_∞ < ∞. This completes the proof of Theorem 3.1, case (i), for 1 < β < 3/2.

Case 0 < β < 1. Proof of (3.29). In the rest of this proof, write λ ≡ λ_{N,n} = N^{1/(2β)}/n → ∞ for brevity. Also denote λ̃ := λ(log λ)... Split the r.h.s. of (3.29) as follows: ... Here L_1 is the main term and L_i, i = 2, 3, are remainders. Indeed, ..., where, by a change of variables, ..., since 0 < β < 1. Similarly, ... This proves that L_i = o(1), i = 2, 3.

Consider the main term L_1. Although E e^{iθ z_{N,n}(τ; x_1, x_2)}, and hence the integrand in L_1, converges point-wise for any (x_1, x_2) ∈ R²_+ (see below), this fact is not very useful, since the contribution to the limit of L_1 from bounded x_i's is negligible due to the presence of the factor 1/log λ → 0 in front of this integral. It turns out that the main (non-negligible) contribution to this integral comes from unbounded x_1, x_2. To see this, by the change of variables y = x_1 + x_2, x_1 = yw, and then w = z/y², we rewrite L_1 as ... In view of L_i = o(1), i = 2, 3, relation (3.29) follows from the representation (3.66) and the existence of the limit (3.68), where the constant k_∞ > 0 is defined below in (3.71). More precisely, (3.68) says that for any ε > 0 there exists K > 0 such that for any N, n, y ... To prove (3.69), rewrite V(θ) of (3.68) as the integral (3.70) with Z_1, Z_2 ∼ N(0, 1) independent normals; then, point-wise for any τ > 0, z > 0, s_i ∈ R, s_i ≠ 0, i = 1, 2, fixed, ... Moreover, the same holds under the conditions of (3.75).

Proof of (3.30). For T_ij(θ) defined by (3.42), let us prove (3.43). Denote N_λ := (N log λ)^{1/(2β)}. Similarly to (3.44), we have ..., with z_{N,n}(τ; x_1, x_2) defined by (3.63). Whence, using (3.64) similarly as in the proof of case (i), we obtain ..., proving (3.43) with any 0 < δ < β. This proves (3.30). We omit the proof of (3.22), which is completely similar to that in case (iii) and elsewhere. This completes the proof of Theorem 3.1 for (t, s) = (0, 1).
Proof of Theorem 3.1 in the general case (t, s) ∈ Z², s ≥ 1. Similarly to (3.17), we decompose S^{t,s}_{N,n}(τ) in (3.1) as S^{t,s}_{N,n}(τ) = S^{t,s}_{N,n;q}(τ) + S^{t,s;†}_{N,n;q}(τ) + S^{t,s;‡}_{N,n;q}(τ), where the main term is a sum of independent Ñ_q = ⌊N/q⌋ blocks of size q − s = q_{N,n} − s → ∞, and the other two are remainder terms. The proof of (3.29)-(3.30) is completely analogous, since the distribution of y^{t,s}_i(τ) does not depend on t and s ≠ 0.

Proof of Theorem 3.2
The proof uses the following result of [23].
Let ... be a triangular array of m-dependent r.v.s with zero mean and finite variance. Assume that: (L1) ... For notational simplicity, we consider only one-dimensional convergence at a fixed τ. The summands are identically distributed random variables with zero mean and finite variance; since ..., Eξ²_n ∼ τσ², where A 12 is independent of B(τ). This follows from the martingale CLT, similarly to (3.50). By the lemma above, we conclude that (Nn)^{−1/2} S^{t,s}_{N,n}(τ) →_d σB(τ). Theorem 3.2 is proved.
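The CLT for m-dependent arrays invoked here can be illustrated by a toy Monte Carlo experiment with 2-dependent moving sums of i.i.d. noise (a hypothetical instance of our own choosing, not the array of the proof): the normalized sums have limiting variance equal to the long-run variance, here (1 + 1 + 1)² = 9.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, reps = 2, 5000, 2000
sums = np.empty(reps)
for r in range(reps):
    e = rng.standard_normal(n + m)
    # 2-dependent row: x_t = e_t + e_{t+1} + e_{t+2}
    x = e[:n] + e[1:n + 1] + e[2:n + 2]
    sums[r] = x.sum() / np.sqrt(n)

# long-run variance: Var(x_t) + 2*cov(x_t, x_{t+1}) + 2*cov(x_t, x_{t+2})
#                  = 3 + 2*2 + 2*1 = 9, so the std of `sums` should be near 3
print(sums.std())
```

The m-dependent structure is what makes the approximation technique mentioned in Remark 3.1 work under only a finite 2nd moment.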
4 Asymptotic distribution of temporal (iso-sectional) sample covariances

The limit distribution of iso-sectional sample covariances γ_{N,n}(t, 0) in (1.5) and of the corresponding partial sums process S^{t,0}_{N,n}(τ) in (3.1) is obtained similarly to the cross-sectional case, with certain differences discussed below. Since the conditional expectation E[...], we decompose (4.1), where S̃^{t,0}_{N,n}(τ) := S^{t,0}_{N,n}(τ) − T^{t,0}_{N,n}(τ) is the conditionally centered term with E[S̃^{t,0}_{N,n}(τ) | a_1, ..., a_N] = 0, and T^{t,0}_{N,n}(τ) is proportional to a sum of i.i.d. r.v.s a^t_i/(1 − a²_i), 1 ≤ i ≤ N, with regularly decaying tail distribution function (see (1.2)). Accordingly, the limit distribution of the appropriately normalized and centered term T^{t,0}_{N,n}(τ) does not depend on t; it can be found from the classical CLT and turns out to be a (β ∧ 2)-stable line, under normalization nN^{1/(β∧2)} (β ≠ 2). The other term, S̃^{t,0}_{N,n}(τ), in (4.1) is a sum of mutually independent partial sums processes Y^{t,0}_{i,n}(τ). The proof of the last fact follows similarly to that of (2.28) and is omitted. As a_i ↑ 1, A^{t,0}_{ii} ∼ 1/(2(1 − a_i)³), and the limit distribution of S̃^{t,0}_{N,n}(τ) can be shown to exhibit a trichotomy on the interval 0 < β < 3, depending on the limit λ*_∞ in (4.3). It turns out that for β > 2 the asymptotically Gaussian term T^{t,0}_{N,n}(τ) dominates S̃^{t,0}_{N,n}(τ) in all cases of λ*_∞, while in the interval 0 < β < 2, T^{t,0}_{N,n}(τ) and S̃^{t,0}_{N,n}(τ) have the same convergence rate. Somewhat surprisingly, the limit distribution of S̃^{t,0}_{N,n}(τ) is a β-stable line in both cases λ*_∞ = ∞ and λ*_∞ = 0, with different scale parameters of the random slope coefficient of this line. A rigorous description of the above limit results is given in Theorems 4.1 and 4.2 below. The proofs of these theorems are similar to, and actually simpler than, those of the corresponding Theorems 3.1 and 3.2 dealing with cross-sectional sample covariances, due to the fact that S^{t,0}_{N,n}(τ) is a sum of row-independent summands, contrary to S^{t,s}_{N,n}(τ), s ≠ 0. Because of this, we omit some details of the proofs of Theorems 4.1 and 4.2. We also omit the more delicate cases β = 1 and β = 2, where the limit results may require a change of normalization or additional centering.
where V * β is a completely asymmetric β-stable r.v. with characteristic function in (4.7) below.
Proof of (4.12), case 0 < λ*_∞ < ∞. We have ..., where the last expectation is taken w.r.t. the Wiener measure P_B. Similarly as in the proof of (3.29), we prove the point-wise convergence of the integrands in (4.14) and (4.16): for any x > 0, ... The proof of (4.17) using Proposition 2.1 is very similar to that of (3.35), and we omit the details. Using (4.17) and the dominated convergence theorem, we can prove the convergence of the integrals, i.e. (4.12). The application of the dominated convergence theorem is justified by the dominating bound (4.18). For 1 < β < 2, (4.18) follows similarly. This proves (4.12) for 0 < λ*_∞ < ∞.
Proof of (4.12), case λ*_∞ = 0. In this case, for any x > 0, ..., similarly as in (4.17). Finally, the use of the dominating bound in (4.18), which is also valid in this case, completes the proof of (4.12) for λ*_∞ = 0.
(iii) Follows from part (ii) by Kolmogorov's criterion, similarly as in the proof of Proposition 2.2.
We have log E exp{iθb ...}. Using cov[X_i(k)X_i(k + t), X_i(k')X_i(k' + t) | a_i] = ... and the same bound as in (2.29), we see that the l.h.s. of (A.21) does not exceed C E[(1 − a_i)^{−2} min{1, 1/(n(1 − a_i))}], which vanishes as n → ∞ by the dominated convergence theorem, due to E(1 − a)^{−2} < ∞.
Relation (3.29) follows from (3.59), (3.61) and the dominated convergence theorem, using the dominating bound ... and (3.69), together with the fact that |V_{N,n}(θ; y)| ≤ C is bounded uniformly in N, n, y.

Table 2: Limit distribution of the sample mean X̄_{N,n} in (1.4) (columns: mutual increase rate of N, n; parameter region; limit distribution).

Remark 1.1. (i) β-stable limits in Table 1 a), arising when N/n^β → 0 and N/n^β → ∞, have different scale parameters, and hence the limit distribution of temporal sample covariances is different in the two cases.