Asymptotic variance of stationary reversible and normal Markov processes

We obtain necessary and sufficient conditions for the regular variation of the variance of partial sums of functionals of discrete and continuous-time stationary Markov processes with normal transition operators. We also construct a class of Metropolis-Hastings algorithms which satisfy a central limit theorem and invariance principle when the variance is not linear in $n$.


Introduction
Let $(\xi_n)_{n\in\mathbb{Z}}$ be a stationary Markov chain defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$ with values in a general state space $(S,\mathcal{A})$, and let the marginal distribution be denoted by $\pi(A)=\mathbb{P}(\xi_0\in A)$. We assume that there is a regular conditional distribution denoted by $Q(x,A)=\mathbb{P}(\xi_1\in A\mid\xi_0=x)$. Let $Q$ also denote the Markov transition operator acting via $(Qg)(x)=\int_S g(s)\,Q(x,ds)$ on $L^2_0(\pi)$, the set of measurable functions on $S$ such that $\int_S g^2(s)\,\pi(ds)<\infty$ and $\int_S g(s)\,\pi(ds)=0$. If $g,h\in L^2_0(\pi)$, the integral $\int_S g(s)h(s)\,\pi(ds)$ will sometimes be denoted by $\langle g,h\rangle$. For a function $g\in L^2_0(\pi)$, let $X_i=g(\xi_i)$, $S_n(X)=\sum_{i=1}^n X_i$ and $\sigma_n(g)=(\mathbb{E}S_n^2(X))^{1/2}$.
Denote by $\mathcal{F}_k$ the $\sigma$-field generated by $\xi_i$ with $i\le k$. For any integrable random variable $X$ we write $\mathbb{E}_k X=\mathbb{E}(X\mid\mathcal{F}_k)$. With this notation, $\mathbb{E}_0 X_1=Qg(\xi_0)=\mathbb{E}(X_1\mid\xi_0)$. We denote by $\|X\|_p$ the norm in $L^p(\Omega,\mathcal{F},\mathbb{P})$.
The Markov chain is called normal when the transition operator $Q$ is normal, that is, when it commutes with its adjoint $Q^*$: $QQ^*=Q^*Q$.
From the spectral theory of normal operators on Hilbert spaces (see for instance [32]), it is well known that for every $g\in L^2_0(\pi)$ there is a unique transition spectral measure $\nu$, supported on the spectrum of $Q$, which is contained in the closed unit disk $D:=\{z\in\mathbb{C}:|z|\le1\}$, such that
$$\mathrm{cov}(X_0,X_n)=\mathrm{cov}(g(\xi_0),Q^n g(\xi_0))=\langle g,Q^n g\rangle=\int_D z^n\,\nu(dz),$$
and
$$\mathrm{cov}(\mathbb{E}_0(X_i),\mathbb{E}_0(X_j))=\langle Q^i g,Q^j g\rangle=\langle g,Q^i(Q^*)^j g\rangle=\int_D z^i\bar z^j\,\nu(dz).$$
In particular, the Markov chain is reversible when $Q=Q^*$. Reversibility is equivalent to requiring that $(\xi_0,\xi_1)$ and $(\xi_1,\xi_0)$ have the same distribution. Furthermore, in the reversible case $\nu$ is concentrated on $[-1,1]$.
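As a concrete numerical illustration of the identity $\langle g,Q^n g\rangle=\int_D z^n\,\nu(dz)$ (a sketch, not part of the development; the chain and the function below are arbitrary choices): for a finite-state reversible chain, $\nu$ is atomic, with one atom at each eigenvalue of $Q$ and weight equal to the squared component of $g$ along the corresponding eigenvector in $L^2(\pi)$.

```python
import numpy as np

# Reversible 3-state chain: Q is symmetric and doubly stochastic, so pi is uniform.
Q = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
pi = np.full(3, 1/3)
g = np.array([1.0, -2.0, 1.0])
g = g - pi @ g                     # center g, so that g lies in L^2_0(pi)

# Symmetrize D^{1/2} Q D^{-1/2}; its eigendecomposition yields the atoms of nu_g.
A = np.diag(np.sqrt(pi)) @ Q @ np.diag(1/np.sqrt(pi))
lam, U = np.linalg.eigh(A)
w = (U.T @ (np.sqrt(pi) * g))**2   # weights of the transition spectral measure nu_g

for n in range(6):
    lhs = pi @ (g * (np.linalg.matrix_power(Q, n) @ g))   # <g, Q^n g> in L^2(pi)
    rhs = np.sum(lam**n * w)                              # int z^n nu(dz)
    assert abs(lhs - rhs) < 1e-10
```

Here the spectrum is real (reversibility), so the atoms lie in $[-1,1]$, matching the remark above.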
Limit theorems for additive functionals of reversible Markov chains have received considerable attention in the literature, not only for their intrinsic interest but also for their wide range of applications, from interacting particle systems (see the seminal paper by Kipnis and Varadhan [20]) and random walks in random environments (see for example [35]) to the relatively recent applications in computational statistics that followed the advent of Markov chain Monte Carlo algorithms (e.g. [17,31]). Limit theorems have appeared under a variety of conditions, notably geometric ergodicity (for an overview see [22]), conditions on the growth of the conditional expectations $\mathbb{E}(S_n\mid X_1)$ (see e.g. [25], [28]), spectral conditions (see [20,14,9]), and conditions on the resolvent of the transition operator (see [35]), a method which also applies in the non-normal case, where spectral calculus may not be available.
The variance of the partial sums plays a major role in limit theorems, where it acts as a normalizer, and also in computational statistics, where the asymptotic variance is used as a measure of the efficiency of an algorithm (see e.g. [34]). It is not surprising, then, that in certain cases conditions for the central limit theorem have been imposed directly on the growth of the variance. In fact, in 1986 Kipnis and Varadhan [20] proved the functional form of the central limit theorem for functionals of stationary reversible ergodic Markov chains under the assumption that
$$\lim_{n\to\infty}\frac{\mathrm{var}(S_n)}{n}=\sigma_g^2<\infty, \qquad (3)$$
and further established necessary and sufficient conditions for the variance of the partial sums to behave linearly in $n$ in terms of the transition spectral measure $\nu$. In particular, they showed that for any reversible ergodic Markov chain the convergence in (3) is sufficient for the functional central limit theorem $S_{[nt]}/\sqrt{n}\Rightarrow|\sigma_g|W(t)$ (where $W(t)$ is the standard Brownian motion, $\Rightarrow$ denotes weak convergence and $[x]$ denotes the integer part of $x$). Moreover, (3) is equivalent to
$$\int_{-1}^{1}\frac{1+t}{1-t}\,\nu(dt)<\infty, \qquad (4)$$
and the finite limiting variance is then given by $\sigma_g^2=\int_{-1}^{1}\frac{1+t}{1-t}\,\nu(dt)$. Furthermore, according to Remark 4 on page 514 in [9], if in addition to (4) we assume $\nu(\{-1\})=0$, then we also have $\lim_{n\to\infty}\sum_{i=0}^{n}\mathrm{cov}(X_0,X_i)=\tfrac12\big(\sigma_g^2+\mathrm{var}(X_0)\big)$. See also [17] for a discussion of when (3) and (4) are equivalent. It is remarkable that in the reversible case conditions (3) and (4) are equivalent, both sufficient for the central limit theorem and invariance principle, and conjectured to be sufficient for the almost sure conditional central limit theorem. It is an open problem whether (4) is also necessary for the central limit theorem.
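For orientation, the equivalence between linear variance growth and the spectral integrability condition can be traced in two lines from the covariance formula $\mathrm{cov}(X_0,X_k)=\int_{-1}^{1}t^k\,\nu(dt)$ (a standard computation, sketched here for the reader's convenience):

```latex
\frac{\operatorname{var}(S_n)}{n}
  = \int_{-1}^{1}\Bigl(1+2\sum_{k=1}^{n-1}\bigl(1-\tfrac{k}{n}\bigr)t^{k}\Bigr)\nu(dt)
  \;\xrightarrow[n\to\infty]{}\;
  \int_{-1}^{1}\Bigl(1+\frac{2t}{1-t}\Bigr)\nu(dt)
  = \int_{-1}^{1}\frac{1+t}{1-t}\,\nu(dt),
```

by monotone convergence on $[0,1)$ and dominated convergence on $(-1,0]$ (the integrand is bounded there), provided $\nu(\{1\})=0$; an atom at $1$ contributes $n\,\nu(\{1\})$ and makes both sides infinite.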
On the other hand, notice that any invertible measure-preserving transformation $T$ generates a unitary, and thus normal, transition operator $Qf(x)=f(T(x))$, since $Q^*f(x)=f(T^{-1}(x))$, whence $QQ^*=Q^*Q=I$ is the identity operator. In particular, any stationary sequence $(\xi_i)_i$ can be treated as a functional of a normal Markov chain. Therefore, for normal, non-reversible Markov chains, (3) and the central limit theorem and invariance principle are no longer equivalent without further assumptions (see e.g. Bradley [4] and Giraudo and Volný [13] for counterexamples).
For the non-reversible case, Gordin and Lifšic [14] applied martingale methods and stated, among other results, the central limit theorem for functionals of stationary ergodic Markov chains with normal transition operator under the spectral condition
$$\int_D\frac{1}{|1-z|}\,\nu(dz)<\infty. \qquad (5)$$
If condition (5) holds, then (3) also holds with
$$\sigma_g^2=\int_D\frac{1-|z|^2}{|1-z|^2}\,\nu(dz).$$
One of our main results, Theorem 6, gives necessary and sufficient conditions for the existence of the limit $\mathrm{var}(S_n)/n\to K<\infty$. We shall see that $\mathrm{var}(S_n)/n\to K$ if and only if $\sigma^2<\infty$ and $\nu(U_x)/x\to C$ as $x\to0^+$. In this case $K=\sigma^2+\pi C$. Furthermore, if (5) holds then $C=0$.
Recently Zhao et al. [37] and Longla et al. [24], in the context of reversible Markov chains, studied the asymptotic behavior of $S_n$ in the more general case when the variance of the partial sums behaves as a regularly varying function, $\sigma_n^2=\mathrm{var}(S_n)=nh(n)$, where $h$ is slowly varying, i.e. $h:(0,\infty)\to(0,\infty)$ is continuous and $h(st)/h(t)\to1$ as $t\to\infty$ for all $s>0$. In this case the situation is different, and in [37,24] examples are given of stationary, reversible, ergodic Markov chains that satisfy the CLT with a normalization different from $\sigma_n$, namely $S_n/\sigma_n\Rightarrow N(0,c^2)$ for some $c\neq1$, $c\neq0$.
On the other hand, in a recent paper, Deligiannidis and Utev [7] studied the relationship between the variance of the partial sums of weakly stationary processes and the spectral measure induced by the unitary shift operator. More precisely, by the Birkhoff-Herglotz theorem (see e.g. Brockwell and Davis [5]), there exists a unique measure on the unit circle, or equivalently a non-decreasing function $F$ on $[0,2\pi]$, called the spectral distribution function, such that
$$\mathrm{cov}(X_0,X_n)=\int_0^{2\pi}e^{in\theta}\,F(d\theta). \qquad (7)$$
If $F$ is absolutely continuous with respect to the normalized Lebesgue measure $\lambda$ on $[0,2\pi]$, then the Radon-Nikodym derivative $f$ of $F$ with respect to the Lebesgue measure is called the spectral density; in other words, $F(d\theta)=f(\theta)\,d\theta$. The main result of [7] is given below. In the sequel, the notation $a_n\sim b_n$ as $n\to\infty$ means that $\lim_{n\to\infty}a_n/b_n=1$.
Theorem A (Deligiannidis and Utev [7]). Let $S_n:=X_1+\cdots+X_n$, where $(X_i)_{i\in\mathbb{Z}}$ is a real weakly stationary sequence. For $\alpha\in(0,2)$, define $C(\alpha):=\Gamma(1+\alpha)\sin(\alpha\pi/2)/[\pi(2-\alpha)]$, and let $h$ be slowly varying at infinity. Then $\mathrm{var}(S_n)\sim n^\alpha h(n)$ as $n\to\infty$ if and only if $F(x)\sim C(\alpha)\,x^{2-\alpha}h(1/x)$ as $x\to0^+$.

In this paper we obtain necessary and sufficient conditions for the regular variation of the variance of partial sums of functionals of stationary Markov chains with normal operators. The necessary and sufficient conditions are based on several different representations, in terms of:
1. the spectral distribution function in the sense of the Birkhoff-Herglotz theorem,
2. the transition spectral measure of the associated transition operator,
3. the harmonic measure of Brownian motion in the disk,
4. a martingale decomposition.
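The role of the spectral mass near the origin in Theorem A rests on the exact identity $\mathrm{var}(S_n)=\int_0^{2\pi}\big|\sum_{k=1}^{n}e^{ik\theta}\big|^2\,F(d\theta)$, which can be checked numerically. The sketch below uses an assumed toy example (covariances $r^k$, whose spectral density is a single Poisson kernel) and compares both sides:

```python
import numpy as np

# Covariances cov(X_0, X_k) = r^k correspond to the spectral density
# f(theta) = (1 - r^2) / (2*pi*|1 - r e^{i theta}|^2).  Illustrative values only.
r, n, N = 0.6, 50, 400000
theta = (np.arange(N) + 0.5) * 2*np.pi / N          # midpoint grid, avoids theta = 0
f = (1 - r**2) / (2*np.pi * np.abs(1 - r*np.exp(1j*theta))**2)

# |sum_{k=1}^n e^{ik theta}|^2 = sin^2(n theta/2) / sin^2(theta/2) (Fejér-type kernel)
kernel = (np.sin(n*theta/2) / np.sin(theta/2))**2
var_spec = np.sum(kernel * f) * (2*np.pi/N)         # spectral side

k = np.arange(1, n)
var_direct = n + 2*np.sum((n - k) * r**k)           # direct covariance sum
assert abs(var_spec/var_direct - 1) < 1e-6
```

The kernel concentrates its mass on an interval of width of order $1/n$ around the origin, which is why only the behavior of $F$ near $0$ governs the growth of $\mathrm{var}(S_n)$.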
In the case of stationary reversible Markov chains we also construct a class of Metropolis-Hastings algorithms with non-linear growth of the variance, for which we establish the invariance principle and the conditional central limit theorem when $\mathrm{var}(S_n)\sim nh(n)$.
Continuous-time processes. In the continuous-time setting, let $\{\xi_t\}_{t\ge0}$ be a stationary Markov process with values in the general state space $(S,\mathcal{A})$, defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$, with stationary measure $\pi$. We assume that the contraction semigroup $\{T_t\}_{t\ge0}$ is strongly continuous on $L^2(\pi)$, and we let $\{\mathcal{F}_t\}_{t\ge0}$ be a filtration on $(\Omega,\mathcal{F},\mathbb{P})$ with respect to which $\{\xi_t\}_t$ is progressively measurable and satisfies the Markov property $\mathbb{E}(g(\xi_t)\mid\mathcal{F}_u)=T_{t-u}g(\xi_u)$ for any $g\in L^2(\pi)$ and $0\le u<t$. Furthermore, we can write $T_t=e^{Lt}$, where $L$ is the infinitesimal generator of the process $\{\xi_t\}_t$ and $\mathcal{D}(L)$ its domain in $L^2(\pi)$. We assume $T_t$ to be normal, that is $T_tT_t^*=T_t^*T_t$, which implies that $L$ is a normal, possibly unbounded, operator with spectrum supported in the left half-plane $\{z\in\mathbb{C}:\Re(z)\le0\}$ (see [32, Theorem 13.38]). In the reversible case the spectrum of $L$ is supported on the negative real half-axis (see [20, Remark 1.7]).
Similarly to the discrete case, with any $f\in L^2(\pi)$ we can associate a unique spectral measure $\nu(dz)=\nu_f(dz)$, supported on the spectrum of $L$, such that $\mathrm{cov}(f(\xi_0),f(\xi_t))=\int e^{zt}\,\nu(dz)$.
In the reversible case Kipnis and Varadhan [20] proved an invariance principle under the condition that $f\in\mathcal{D}((-L)^{-1/2})$, which in spectral form is equivalent to
$$\int\frac{1}{|z|}\,\nu(dz)<\infty.$$
Building on the techniques in [14,20], Holzmann [18,19] established the central limit theorem for processes with normal transition semigroups (see also [27]), under the analogous spectral integrability condition on $\nu$. In this case the limiting variance is
$$\varsigma^2=\int\frac{-2\Re(z)}{|z|^2}\,\nu(dz).$$
On the other hand, using resolvent calculus, Toth [36,35] treated general discrete and continuous-time Markov processes and obtained a martingale approximation, a central limit theorem and convergence of finite-dimensional distributions to those of Brownian motion, under conditions on the resolvent operator which may hold even in the non-normal case. Similar conditions, albeit in the normal case, also appeared later in [18,19]. Under any of the above conditions, it is clear that the variance of $S_T$ is asymptotically linear in $T$. Similarly to the discrete case, we show in Theorem 15 that $\mathrm{var}(S_T)/T\to K=\varsigma^2+\pi C$ if and only if $\varsigma^2<\infty$ and $\nu(U_x)/x\to C$, where $U_x$ is the neighborhood of the origin defined in Section 3. The rest of the paper is structured as follows. We provide our results for discrete-time processes in Section 2 and for continuous time in Section 3. Section 4 contains the proofs, while the Appendix contains two standard Tauberian theorems, to make the text self-contained, and technical lemmas used in Section 3.
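In the continuous-time normal setting, the linear-growth heuristic can be sketched directly from the covariance formula $\mathrm{cov}(g(\xi_0),g(\xi_t))=\int e^{zt}\,\nu(dz)$ (a formal computation, assuming enough integrability to exchange limits and integrals):

```latex
\operatorname{var}(S_T(g))
 = 2\,\Re\int_{H^-}\int_0^T (T-u)\,e^{zu}\,du\,\nu(dz)
 = 2\,\Re\int_{H^-}\frac{e^{zT}-1-zT}{z^{2}}\,\nu(dz),
\qquad\text{so}\qquad
\frac{\operatorname{var}(S_T(g))}{T}\;\xrightarrow[T\to\infty]{}\;
 2\,\Re\int_{H^-}\Bigl(-\frac1z\Bigr)\nu(dz)
 = \int_{H^-}\frac{-2\,\Re(z)}{|z|^{2}}\,\nu(dz),
```

when the last integral is finite; the theorems of Section 3 quantify exactly when such linear behavior holds, and what happens when it fails.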
2 Results for Markov chains

2.1 Relation between the transition spectral measure and the spectral distribution function

Our first result gives a representation of the spectral distribution function in terms of the transition spectral measure. This link makes it possible to use the results in [7] to analyze the variance of partial sums. Quite remarkably, if the transition spectral measure is supported on the open unit disk, the spectral distribution function is absolutely continuous with spectral density given by (10), and in this case the sequence $\mathrm{cov}(X_0,X_n)$ converges to $0$.
Lemma 1 (Representation Lemma). Let $(\xi_n)_{n\in\mathbb{Z}}$ be a stationary Markov chain with normal transition operator $Q$. Let $g\in L^2_0(\pi)$, $X_i:=g(\xi_i)$, and write $\nu=\nu_g$ for the transition spectral measure associated with $g$. Denote the unit circle by $\Gamma:=\{z:|z|=1\}$ and the open unit disk by $D_0:=\{z:|z|<1\}$. Denote by $\nu_\Gamma$ the restriction of $\nu$ to $\Gamma$ and by $\nu_0$ the restriction of $\nu$ to $D_0$. Then the function
$$f(\theta)=\frac{1}{2\pi}\int_{D_0}\frac{1-|z|^2}{|e^{i\theta}-z|^2}\,\nu_0(dz) \qquad (10)$$
belongs to $L^1(0,2\pi)$, and the spectral distribution function has the representation
$$F(d\theta)=f(\theta)\,d\theta+\tilde\nu_\Gamma(d\theta), \qquad (11)$$
where $\tilde\nu_\Gamma$ denotes the image of $\nu_\Gamma$ under the map $e^{i\theta}\mapsto\theta$.

Remark 2. Integrating relation (11) gives an expression for $F(x)$ in terms of the Poisson integrals of $\nu_0$ and the boundary mass $\nu_\Gamma$. By combining Representation Lemma 1 with Theorem A we obtain the following corollary.
It should be clear from the statement of Representation Lemma 1 that Theorem A is directly applicable to the measure $dF$. The conditions on $F$ mentioned in Corollary 3, when expressed in terms of the transition spectral measure, become technical conditions on the growth of integrals of the Poisson kernel over the unit disk. To get further insight into this lemma, we shall apply it to reversible Markov chains.
Our next result, a corollary of Representation Lemma 1 combined with Theorem A, provides this link and points out a set of equivalent conditions for regular variation of the variance for reversible Markov chains. In fact, as it turns out, if the spectral measure has no atoms at $\pm1$, then the spectral distribution function is absolutely continuous and we obtain an expression for the spectral density. Related ideas, under more restrictive assumptions, have appeared in [11], while in [9] a spectral density representation was obtained for positive self-adjoint transition operators, in other words for $\nu$ supported on $[0,1)$.

Corollary 4.
Assume that $Q$ is self-adjoint and that the transition spectral measure $\nu$ does not have atoms at $\pm1$. Then the spectral distribution function $F$ defined by (7) is absolutely continuous with spectral density
$$f(\theta)=\frac{1}{2\pi}\int_{(-1,1)}\frac{1-t^2}{|e^{i\theta}-t|^2}\,\nu(dt),$$
and for $\alpha\in[1,2)$ the following are equivalent:

Relation between spectral measure and planar Brownian motion
Our next result makes essential use of the Poisson kernel which appears in (10) to provide a fascinating interpretation of the spectral distribution function F in terms of the harmonic measure of planar Brownian motion started at a random point in the open unit disk.
Theorem 5. Let $\nu$ be the transition spectral measure, and let $(B^z_t)_{t\ge0}$ be standard planar Brownian motion in $\mathbb{C}$, started at the point $z\in D$. Also let $Z$ be a random point in $D$ distributed according to $\nu$, and let $\tau^Z_D:=\inf\{t\ge0:B^Z_t\notin D\}$. Let $\Gamma_x:=\{z:z=e^{iy},\,|y|<x\}$ and $\alpha\in(0,2)$. Then the following statements are equivalent:

Linear growth of the variance of partial sums for normal Markov chains
By applying martingale techniques we establish necessary and sufficient conditions for asymptotically linear growth of the variance for general normal Markov chains.

Theorem 6. With the notation of Lemma 1:
(a) The limit $\mathrm{var}(S_n)/n\to K<\infty$ exists if and only if (14) holds, where $K=\sigma^2+L$.
(b) Moreover, under (14) the following are equivalent:

It should be noted that there are many sufficient conditions for the convergence (16).

(1) It is immediate from the proof of Theorem 6 that (16) is equivalent to $\sigma^2<\infty$ together with (17).

(2) In Corollary 7.1 in [3] it was shown that, under a stronger spectral assumption, both (14) and (17) are satisfied and so convergence (16) holds, a result attributed to Gordin and Lifšic [14] (see Theorem 7.1 in [3]; see also [9]).
(3) Further sufficient conditions follow from Representation Lemma 1 and Ibragimov's version of the Hardy-Littlewood theorem.

(4) On the other hand, from Representation Lemma 1 and Theorem A, convergence (16) is equivalent to the uniform integrability of $T_x(z)$ with respect to $\nu_0$ as $x\to0^+$.

(5) Motivated by the complex Darboux-Wiener-Tauberian approach (e.g. as in [8]), one obtains a further sufficient condition for (16); here, since $\nu(d\bar z)=\nu(dz)$, the relevant integral is understood in the Cauchy sense.
(6) From Theorem 6 it follows that (16) is equivalent to $\sigma^2<\infty$ and $\nu(U_x)/x\to0$ as $x\to0^+$.

(7) Finally, again from Theorem 6, it follows that (16) is equivalent to $\sigma^2<\infty$ and $n\nu(D_n)\to0$ as $n\to\infty$. This result is also a corollary of an inequality, motivated by Cuny and Lin [6], which bounds $n\nu(D_n)$ by a constant (at most $36$) times $\mathrm{var}(S_n)/n$.

Remark 7. Notice that when the transition spectral measure $\nu$ is concentrated on $\Gamma$ (the dynamical-system case), then $\sigma^2=0$, and so $\sigma^2$ cannot be the limiting variance in general.
By inspecting the proof of Theorem 6, under (14) we are able to characterize the regular variation of $\mathrm{var}(S_n)$ when $\liminf_{n\to\infty}\mathrm{var}(S_n)/n>0$. More precisely, we have the following proposition.
Proposition 8. Assume $\liminf_{n\to\infty}\mathrm{var}(S_n)/n>0$ and that (14) holds. Then, for $\alpha\in[1,2)$ and a positive function $h$ slowly varying at infinity, $\mathrm{var}(S_n)\sim n^\alpha h(n)$ as $n\to\infty$ if and only if $\nu(U_x)/x\sim c\,x^{1-\alpha}h(1/x)$ as $x\to0^+$, for an appropriate constant $c$. In particular, $\mathrm{var}(S_n)/n$ is slowly varying as $n\to\infty$ if and only if $\nu(U_x)/x$ is slowly varying as $x\to0^+$.

Relation between the variance of partial sums and transition spectral measure of reversible Markov chains
We continue the study of stationary reversible Markov chains and provide further necessary and sufficient conditions for the variance of the partial sums to be regularly varying, in terms of the transition spectral measure, by a direct approach that does not use the link with the spectral distribution function.
Theorem 9. Assume $Q$ is self-adjoint, $\alpha\ge1$ and $\mathrm{var}(S_n)/n\to\infty$, and let $c_\alpha:=\alpha(2-\alpha)/(2\Gamma(3-\alpha))$.

Remark 10. It should be clear from the statement of the above theorem that regular variation of the variance is equivalent to regular variation of the transition spectral measure only in the case $\alpha>1$. As the following example demonstrates, in the case $\alpha=1$ there are reversible Markov chains whose variance of partial sums varies regularly with exponent $1$ even though the transition spectral measure is not regularly varying.
Example 11. Take a probability measure $\upsilon$ on $[-1,1]$, defined for $0<a<1/2$ with normalizing constant $c$, and let $\mu$ denote the corresponding unique invariant measure. Computing the relevant integral, we obtain by Theorem 9 that
$$\lim_{n\to\infty}\frac{\mathrm{var}(S_n)}{n\log n}=c.$$
However, the covariances are not regularly varying, because the spectral measure is not. To see this, it is enough to show that $r(x)$ is not regularly varying at $0$. Indeed, if we take $y_k=e^{-2\pi k}\to0^+$ and $z_k=e^{\pi/2-2\pi k}\to0^+$, then $r(y_k)=y_k$, so that $r(y_k)/y_k\to1$, while for the second choice $r(z_k)=z_k(1+2a)$, so that $r(z_k)/z_k\to1+2a$; hence the spectral measure is not regularly varying.
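The mechanism here is log-periodicity: along geometric subsequences aligned with the period, the ratio stabilizes at different values. A hypothetical function of the same flavor (not the exact $r$ of Example 11) can be checked directly:

```python
import math

a = 0.3   # illustrative amplitude

def r(x):
    # hypothetical log-periodically perturbed function, r(x)/x has no limit at 0
    return x * (1 + a * math.sin(math.log(x)))

for k in range(1, 6):
    xk = math.exp(-2*math.pi*k)            # sin(log xk) = 0, so r(xk)/xk -> 1
    zk = math.exp(math.pi/2 - 2*math.pi*k) # sin(log zk) = 1, so r(zk)/zk -> 1 + a
    assert abs(r(xk)/xk - 1) < 1e-9
    assert abs(r(zk)/zk - (1 + a)) < 1e-9
```

Two subsequential limits of $r(x)/x$ ($1$ and $1+a$) rule out regular variation at the origin, exactly as in the example.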
Remark 12. Often in the literature, conditions for the linear growth of the variance are given in terms of the covariances (see for example [17]). As it turns out, one can construct positive covariance sequences such that $\sum_{k=0}^{n}\mathrm{cov}(X_0,X_k)=h(n)$ is slowly varying, and hence the variance is regularly varying, but $a_n=\mathrm{cov}(X_0,X_n)>0$ is not slowly varying. To construct such a chain, suppose that $(\varepsilon_n)$ is an oscillating positive sequence such that $\varepsilon_n\to0$ and $\sum_k a_k=\infty$, where $a_k:=\varepsilon_k/k$. Then $g_n=\sum_{k=1}^{n}a_k$ is slowly varying, since $\varepsilon_k\to0$ forces $g_{[sn]}-g_n\to0$ for every $s>1$. So, a priori, we have many situations where $\mathrm{var}(S_n)=nh(n)$ even though the covariances (and hence the transition spectral measure) are not regularly varying.
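A quick numerical sanity check of this construction (with an illustrative choice $\varepsilon_k=(1+\sin\log k)/\log(k+2)$, which is nonnegative, oscillating, and tends to $0$): the partial sums $g_n=\sum_{k\le n}\varepsilon_k/k$ barely move between $n$ and $2n$, as slow variation demands.

```python
import math

def eps(k):
    # nonnegative, oscillating, tends to 0 (illustrative choice, not from the paper)
    return (1 + math.sin(math.log(k))) / math.log(k + 2)

g = 0.0
snapshots = {}
for k in range(1, 200001):
    g += eps(k) / k                       # a_k = eps_k / k
    if k in (100000, 200000):
        snapshots[k] = g

# g_{2n} / g_n is close to 1 for large n: numerically consistent with slow variation
assert 0.9 < snapshots[200000] / snapshots[100000] < 1.1
```

Meanwhile $n\,a_n=\varepsilon_n$ keeps oscillating, so the individual covariances carry no regular-variation structure even though their partial sums do.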
The above proof was direct, in the sense that it relied only on classical Tauberian theory, without linking the transition spectral measure to the spectral distribution function and thus without invoking the results of [7].

Examples of limit theorems with non-linear normalizer
As an application we construct a class of stationary irreducible and aperiodic Markov chains, based on the Metropolis-Hastings algorithm, with $\mathrm{var}(S_n)\sim nh(n)$. Markov chains of this type are often studied in the literature from different points of view, e.g. in Doukhan et al. [10], Rio ([29] and [30]), Merlevède and Peligrad [26], Zhao et al. [37] and Longla et al. [24].
Let $E=\{|x|\le1\}$ and define the transition kernel of a Markov chain by
$$Q(x,A)=|x|\,\delta_x(A)+(1-|x|)\,\upsilon(A),$$
where $\delta_x$ denotes the Dirac measure and $\upsilon$ is a symmetric probability measure on $[-1,1]$, in the sense that $\upsilon(A)=\upsilon(-A)$ for any $A\subset[0,1]$. We shall assume that
$$\int_{-1}^{1}\frac{\upsilon(dx)}{1-|x|}<\infty.$$
We mention that $Q$ is a stationary transition function with invariant distribution
$$\mu(dx)=c\,\frac{\upsilon(dx)}{1-|x|},$$
where $c$ is the normalizing constant. Then the stationary Markov chain $(\xi_i)_i$ with values in $E$, transition probability $Q(x,A)$ and marginal distribution $\mu$ is reversible and positively recurrent. Moreover, for any odd function $g$ we have $Q^k(g)(x)=|x|^k g(x)$. For the odd function $g(x)=\mathrm{sgn}(x)$, define $X_i:=\mathrm{sgn}(\xi_i)$. Then, for any positive integer $k$,
$$\langle g,Q^k(g)\rangle=\int_{-1}^{1}|x|^k\,\mu(dx),$$
and we assume that $V(x):=\mu(\{t:1-|t|\le x\})$ is slowly varying as $x\to0^+$. Our next result presents a large class of transition spectral measures for the model above which lead to the functional central limit theorem.
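The two structural claims above, invariance of the law $\mu(dx)\propto\upsilon(dx)/(1-|x|)$ and $Qg=|x|\,g$ for odd $g$, can be verified on a discretized version of the kernel (a numerical sketch; the particular $\upsilon$ below is an arbitrary symmetric choice):

```python
import numpy as np

# Discretized version of Q(x, .) = |x| delta_x + (1 - |x|) upsilon
# on a symmetric grid in (-1, 1), avoiding the endpoints.
m = 500
x = (np.arange(-m, m) + 0.5) / m
ups = (1 - np.abs(x))**0.5                 # an illustrative symmetric upsilon
ups /= ups.sum()

Q = np.abs(x)[:, None] * np.eye(2*m) + (1 - np.abs(x))[:, None] * ups[None, :]

mu = ups / (1 - np.abs(x))                 # candidate invariant law ~ upsilon/(1-|x|)
mu /= mu.sum()
assert np.allclose(mu @ Q, mu)             # stationarity: mu Q = mu

g = np.sign(x)                             # odd function g = sgn
assert np.allclose(Q @ g, np.abs(x) * g)   # Q g = |x| g, since upsilon is symmetric
```

The second identity is exactly what makes the covariances $\langle g,Q^k g\rangle=\int|x|^k\,\mu(dx)$ computable in closed form, and hence the growth of $\mathrm{var}(S_n)$ tunable through the behavior of $\upsilon$ near $\pm1$.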
Theorem 13. Let $V(x)$ be slowly varying as $x\to0^+$. Then the central limit theorem, the functional central limit theorem and the conditional central limit theorem hold for the partial sums of the $X_i$ defined above.
Next, we give a particular example of a Metropolis-Hastings algorithm in which a non-degenerate central limit theorem holds under a certain normalization, whereas when normalized by the standard deviation the limiting distribution is degenerate.
On the other hand, let us choose $b_n$ such that $n\mathbb{E}[\tau_1^2 I(\tau_1\le b_n)]/b_n^2\sim1$ as $n\to\infty$. By Lemma 19 it follows that $2\theta nV(1/b_n)\sim b_n^2$, with $\theta$ as defined in (19).

3 Continuous-time Markov processes
Suppose we have a stationary Markov process $\{\xi_t\}_{t\ge0}$ with values in the general state space $(S,\mathcal{A})$, and for $g\in L^2(\pi)$ set $S_T(g):=\int_0^T g(\xi_t)\,dt$, where $L$ is the infinitesimal generator, which we assume to be normal; its spectrum is then supported on $H^-:=\{z\in\mathbb{C}:\Re(z)\le0\}$, and $\mathrm{cov}(g(\xi_0),g(\xi_t))=\int e^{zt}\,\nu(dz)$.

Theorem 14. Let $\{\xi_t\}_t$ be a stationary Markov process with normal generator $L$ and invariant measure $\pi$, let $g\in L^2(\pi)$ and let $\nu=\nu_g$ be the transition spectral measure associated with $L$ and $g$. Write $(B^z_t)_{t\ge0}$ for a standard planar Brownian motion in $\mathbb{C}$, started at the point $z\in H^-$. Also let $Z$ be a random point in $H^-$ distributed according to $\nu$, and let $\tau^Z_{H^-}:=\inf\{t\ge0:B^Z_t\notin H^-\}$. For $\alpha\in(0,2)$, the following statements are equivalent:

The following theorem gives a necessary and sufficient condition in terms of the transition spectral measure $\nu$; define, for $x>0$, the corresponding neighborhood $U_x$ of the origin.

Theorem 15. With the notation of Theorem 14, the following are equivalent: (i) $\mathrm{var}(S_T(g))/T\to L=\varsigma^2+K$, where $K>0$; (ii) $\varsigma^2<\infty$ and $\nu(U_x)/x\to K/\pi$ as $x\to0^+$.

4 Proofs
Proof of Representation Lemma 1. For $t\in[-\pi,\pi]$ and $z\in D_0$ define the Poisson kernel for the unit disk,
$$P_z(t):=\frac{1-|z|^2}{|e^{it}-z|^2}.$$
Our approach is to integrate on $D_0$ with respect to $\nu_0(dz)$, obtaining in this way a function defined on $[0,2\pi]$:
$$f(\theta):=\frac{1}{2\pi}\int_{D_0}P_z(\theta)\,\nu_0(dz).$$
The function is well defined, since we are integrating the positive Poisson kernel over the open disk, and in fact, by using polar coordinates, we also have $\int_0^{2\pi}f(\theta)\,d\theta=\nu_0(D_0)$. Therefore $f\in L^1(0,2\pi)$, and it makes sense to calculate its Fourier coefficients. Now, by (7), the spectral distribution function $F$ associated with the stationary sequence $(X_i)_i$ is then given by (11).
Proof of Corollary 4. The result follows from Representation Lemma 1 and Theorem A. To obtain the last point of the corollary, the spectral measure has, by standard analysis, a useful asymptotic representation near the boundary.

Proof of Theorem 5. As usual, let $D$ be the closed unit disk and $D_0$ its interior. From Representation Lemma 1, the spectral density $f\in L^1([-\pi,\pi])$ is given by formula (10). Notice that $D$ is regular for Brownian motion, in the sense that all points of $\Gamma=\partial D$ are regular: for all $z\in\partial D$ and $\tau^z_D:=\inf\{t>0:B^z_t\notin D\}$ we have $P_z\{\tau^z_D=0\}=1$. The harmonic measure in $D$ from $z$ is the probability measure $\mathrm{hm}(z,D;\cdot)$ on $\partial D$ given by
$$\mathrm{hm}(z,D;V)=P_z\{B_{\tau^z_D}\in V\},$$
where $P_z$ denotes the probability measure of Brownian motion started at the point $z$, and $V$ is any Borel subset of $\partial D$.
Since $\partial D$ is piecewise analytic, $\mathrm{hm}(z,D;\cdot)$ is absolutely continuous with respect to Lebesgue measure (length) on $\partial D$, and its density is the Poisson kernel (see for example [23]). In the case of the unit disk, the density of $\mathrm{hm}(z,D;\cdot)$, for $z\in D$ and $w\in\partial D$ or $t\in[0,2\pi]$, is the Poisson kernel above. Let $Z$ be a $D$-valued random variable with probability measure $\nu$, properly normalized, and independent of the Brownian motion. On the other hand, since $\Gamma=\partial D$ is regular for Brownian motion, Brownian motion started at $z\in\Gamma$ exits immediately at $z$. Therefore, from Representation Lemma 1, the measure $H_{\nu,D}(\cdot)$ is essentially the harmonic measure when Brownian motion starts at a random point and stops when it hits $\partial D$. Finally, from Theorem A we conclude that $\mathrm{var}(S_n)$ is regularly varying if and only if the measure $H_{\nu,D}$ is regularly varying at the origin.
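The identification of the Poisson kernel with the exit density of Brownian motion can be sanity-checked numerically: it integrates to one and reproduces harmonic functions from their boundary values (the point $z$ and the test function below are illustrative choices):

```python
import numpy as np

# Poisson kernel of the unit disk = density of the Brownian exit distribution from z.
z = 0.4 + 0.3j
N = 100000
t = (np.arange(N) + 0.5) * 2*np.pi / N
dt = 2*np.pi / N
P = (1 - abs(z)**2) / (2*np.pi * np.abs(np.exp(1j*t) - z)**2)

assert abs(P.sum()*dt - 1) < 1e-8              # it is a probability density on the circle

h = np.real(np.exp(1j*t)**2)                   # boundary values of Re(w^2), harmonic in D
assert abs((h*P).sum()*dt - (z**2).real) < 1e-8  # mean-value (harmonic measure) property
```

Averaging a harmonic function against the exit distribution recovers its value at the starting point, which is precisely the harmonic-measure interpretation used in the proof.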
Thus, by the Lebesgue dominated convergence theorem and our conditions, the limit exists. This, along with Fatou's lemma, proves that $\mathrm{var}(S_n)/n\to K$ exists if and only if $\sigma^2<\infty$ and the limit in (14) exists. Now let us introduce a new measure on $D_0$ which is finite when $\sigma^2<\infty$.
To complete the proof of the first part of the theorem, we notice that the required bound follows by spectral calculus, since $\mu$ is a finite measure.
To prove the second part of the theorem, we show the equivalence of (i) and (ii), and then of (ii) and (iii). Throughout we use the notation $z=(1-y)e^{i\theta}$. We note first that
$$|1-z^n|^2=(1-|z|^n)^2+4|z|^n\sin^2(n\theta/2).$$
The proof strategy consists in showing, in several successive approximation steps, that for some appropriate measure $G$ the variance is asymptotically equivalent to an integral against $G$, and then applying Theorem A. With this in mind, we write the variance as a main term plus error terms. By the Lebesgue dominated convergence theorem, since the bounded integrand goes to $0$ for each $|z|\le1$, we have $\Delta'_n\to0$ as $n\to\infty$. Similarly, by the Lebesgue dominated convergence theorem, $\Delta''_n\to0$ as $n\to\infty$, since the bounded integrand goes to $0$ for each $|z|<1$.
Fix now a small $a>0$, recall that $z=(1-y)e^{i\theta}$, and define an auxiliary subset $D_a$ of $D$. Further, notice that, by the dominated convergence theorem,
$$\varepsilon_n=\int_{D_a}\frac{\sin(n\,\mathrm{Arg}(z)/2)}{n\,\mathrm{Arg}(z)}\,\mu(dz)\to0\quad\text{as }n\to\infty,$$
since the bounded integrand goes to $0$ for each $|z|<1$.
Finally, let us define the set $W_x$. Then it follows that $I_n/n\to L$ if and only if the corresponding averages converge, where $I_n$ involves the Fejér kernel. Define $G(x)=\nu(U_x)$ and notice that it is a non-negative, non-decreasing, bounded function. In addition, the convergence holds for any step function $g(\theta)=I(u<|\theta|\le v)=I(|\theta|\le v)-I(|\theta|\le u)$, with $u,v$ being continuity points of $G$, and then, by the Carathéodory and Lebesgue theorems, we complete the proof of the equivalence of (i) and (ii) in part (b).
On the other hand, on $W_x$ we have
$$\frac{1-|z|}{|1-z|^2}\ge\frac{1}{2x},$$
and hence, by (14), $\nu(D_{1/x})/x\to C$ if and only if (ii) holds as $x\to0^+$, which completes the proof. Under the change of variables $z\mapsto iw$, the finite measure $\nu$ is transformed into a finite measure $\rho$ on $H$, which for simplicity we may assume to be a probability measure; for $w:=a+ib$ a direct computation applies, and a further change of variables $z=\tan(t/2)$ gives the corresponding formula.

Proof of Theorem 9. Let $1\le\alpha\le2$ and denote $C(n):=\sum_{i=0}^{n-1}\mathrm{cov}(X_0,X_i)$. We start from the well-known representation
$$\mathrm{var}(S_n)=2\sum_{k=1}^{n}C(k)-n\,\mathbb{E}X_0^2.$$
It is clear then, since $\mathrm{var}(S_n)/n\to\infty$, that $\mathrm{var}(S_n)$ has the same asymptotic behavior as $2\sum_{k=1}^{n}C(k)$. Introducing the measure $\nu_1$, which coincides with $\nu$ on $(0,1]$ and satisfies $\nu_1(\{0\})=0$, we decompose $C(k)=C_1(k)+C_2(k)$, where $C_1$ collects the contribution of $\nu_1$. We shall show that the terms $C_1(k)$ make the dominant contribution to the variance of the partial sums. To analyze $C_2(k)$ it is convenient to make blocks of size $2$, a trick that has also appeared in [17], where it is attributed to [15]. One obtains $|C_2(k)|\le2\mathbb{E}(X_0^2)$ for all $k$, and so $\sum_{k=1}^{n}C_2(k)\le2n\mathbb{E}(X_0^2)$. Because $\mathrm{var}(S_n)/n\to\infty$, we conclude that $\mathrm{var}(S_n)$ has the same asymptotic behavior as $2\sum_{k=1}^{n}C_1(k)$. Now, each $C_1(k)=\sum_{j=0}^{k-1}a_j$ with $a_j>0$, so the sequence $(C_1(k))_k$ is increasing, and by the monotone Tauberian theorem (Theorem 18; see [2, Cor. 1.7.3]) the asymptotics of $\sum_{k\le n}C_1(k)$ transfer to those of $C_1(n)$ for all $\alpha\ge1$. It is convenient to consider the transformation $T$ reducing the problem to a Laplace transform: by the change of variables $1-u=e^{-y}$, the asymptotics of $C_1(n)$ reduce to those of the Laplace transform of the measure induced by $\nu_1$. From here we apply Karamata's Tauberian Theorem 17; since $\alpha\le2$, the Laplace-transform asymptotics hold if and only if the corresponding asymptotics of the induced distribution function hold, and again by the monotone Karamata Theorem 18 this happens if and only if that distribution function is regularly varying. Changing variables $x=1-e^{-y}$, taking into account Karamata's representation for slowly varying functions, and combining the results in relations (22)-(27), the claimed equivalence follows. When $1<\alpha<2$ one can say more: the distribution function induced by the spectral measure is regularly varying.
Note that, again by Theorem 18, since $(a_k)$ is a monotone sequence of positive numbers and $\alpha-1>0$, the corresponding partial sums are regularly varying. We then obtain, as before, by the properties of slowly varying functions, the stated asymptotics; this last relation combined with (24) gives the last part of the theorem.
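The two Tauberian transfers used in the proof, from partial sums to power asymptotics and from generating functions near $s=1$ back to the coefficients, can be illustrated numerically on a monotone model sequence (illustrative $a_k=k^{\alpha-1}$, not the $a_k$ of the proof):

```python
import math

alpha = 1.5                      # illustrative exponent in (1, 2)

# Partial sums: sum_{k<=N} k^(alpha-1) ~ N^alpha / alpha
N = 200000
partial = sum(k**(alpha - 1) for k in range(1, N + 1))
assert abs(partial / (N**alpha / alpha) - 1) < 0.01

# Generating function near s = 1 (Karamata):
# sum_k k^(alpha-1) s^k ~ Gamma(alpha) (1-s)^(-alpha)
s = 1 - 1e-4
gen = sum(k**(alpha - 1) * s**k for k in range(1, 400000))
assert abs(gen / (math.gamma(alpha) * (1 - s)**(-alpha)) - 1) < 0.01
```

Monotonicity of the coefficients is what licenses the converse direction (Tauberian, not merely Abelian), exactly as in the application of Theorem 18 above.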
Proof of Theorem 13. We only prove the central limit theorem, the other results following in a similar manner. Our approach is based on the regeneration process. Define the regeneration times $(T_k)_{k\ge1}$ and let $\tau_k:=T_{k+1}-T_k$. It is well known that $(\xi_{T_k},\tau_k)_{k\ge1}$ are i.i.d. random variables, with $\xi_{T_k}$ having the distribution $\upsilon$. It then follows by the law of large numbers that $T_n/n\to\theta$ a.s. Let us study the tail distribution of $\tau_1$. Conditioning on $\xi_{T_1}$ and integrating, we obtain the tail of $\tau_1$; using the relation between $\upsilon(dx)$ and $\mu(dx)$ and symmetry, this tail can be expressed in terms of the spectral measure $\nu$. Using the fact that $V(x)$ is slowly varying, together with Lemma 19 in the Section "Technicalities", the asymptotic relation (29) follows. For each $n$, let $m_n$ be such that $T_{m_n}\le n<T_{m_n+1}$. Note that we have the representation of $S_n$ in terms of $\sum_{k=1}^{m_n}Y_k$, where $Y_k=\tau_kX_{T_k}$ is a centered i.i.d. sequence which, by (29), is in the domain of attraction of a normal law (see Feller [12]). Therefore the normalized sums converge to a normal law, where $b_n^2\sim nH(b_n)$. The rest of the proof is completed along the same lines as the proof of Example 12 in [24].

Proof of Theorem 14. The proof is similar to that of Theorem 5, once one observes the corresponding kernel identity for $z=a+ib$ with $a\le0$, $b\in\mathbb{R}$, $t>0$ and $x\in\mathbb{R}$. Therefore, by Fubini's theorem, the spectral density is an integral of the half-plane Poisson kernel against $\nu$. Letting $z=wi$, with $w\in H$, and using the conformal invariance of Brownian motion, one immediately deduces that this kernel is the density of the harmonic measure of Brownian motion in the left half-plane started at the point $z$.
Proof of Theorem 15. First observe that the variance splits into two terms, $I_1$ and $I_2$.
Next we analyze $I_2/T$. Notice that $|1-e^{zT}|^2=(1-e^{Tx})^2+4e^{Tx}\sin^2(Ty/2)$, and since $(1-e^{Tx})^2\le T|x|$ for $x<0$, we obtain, with $G(x)=\nu(U_x)$ and arguments similar to those in the proof of Theorem 6, the desired convergence. The result then follows from Theorem A.
Then, for any $\delta\in(0,1)$, $R$ can be written in an equivalent integral form, and since $V(1/y)$ is slowly varying as $y\to\infty$, we have by Theorem 1.6.5 in [2] that
$$\frac{r(z)}{zV(z)}\to0\quad\text{as }z\to0^+.$$
Now let $K$ be an arbitrary positive number. Since we first take limits as $x\to0^+$, we may assume that $x$ is small enough that $Kx<1$. Splitting the integral at $Kx$, and since $K>0$ is arbitrary, we obtain $I_2(1/x)/V(x)\to1$ as $x\to0^+$, with $u=1/x$. Therefore, for $u=1/x$, we may bound the limit superior as above, and since $K$ is arbitrary the claim follows.