First and second order analysis for periodic random arrays using block bootstrap methods

In this paper row-wise periodically correlated triangular arrays are considered. The period length is assumed to grow in time. The Fourier decomposition of the mean and autocovariance functions is presented for each row of the array. To construct bootstrap estimators of the Fourier coefficients, two block bootstrap techniques are used: the circular version of the Generalized Seasonal Block Bootstrap and the Circular Block Bootstrap. Consistency results for both methods are presented. Bootstrap-t equal-tailed confidence intervals for the parameters of interest are constructed. The results are illustrated by an example based on simulated data.


Introduction
In recent years the number of bootstrap applications for periodic processes has grown constantly. An important class of periodic processes are periodically correlated (PC) processes, which were introduced by Gladyshev (1961). They are widely used for modeling in many different settings, such as climatology, hydrology, mechanics, vibroacoustics and economics. Many motivating examples can be found in Gardner et al. (2006), Hurd and Miamee (2007), Antoni (2009) and Napolitano (2012).

A time series X_t is called PC with period d if it has periodic mean and autocovariance functions, i.e.

E(X_{t+d}) = E(X_t)  and  B(t, τ) = Cov(X_t, X_{t+τ}) = Cov(X_{t+d}, X_{t+τ+d})   (1)

for each t, τ ∈ Z. The period d is taken as the smallest positive integer such that (1) holds.

There are two block bootstrap methods that can be applied to PC processes: the Moving Block Bootstrap (MBB) and the Generalized Seasonal Block Bootstrap (GSBB). The first method was introduced independently by Künsch (1989) and Liu and Singh (1992). It is very general, but it does not preserve the periodic structure contained in the data; as a result, the number of possible applications of this technique to PC time series is limited. On the other hand, the GSBB of Dudek et al. (2014a) was designed for periodic processes. It keeps the periodicity, but to apply it one needs to know the period length. The GSBB is a generalization of two block bootstrap methods: the Seasonal Block Bootstrap (SBB) (Politis (2001)) and the Periodic Block Bootstrap (Chan et al. (2004)).

The first- and second-order characteristics of PC time series that are often considered can be split into two groups, depending on the domain in which the analysis is performed. In the time domain one may be interested, for example, in the overall mean and the seasonal means. In the frequency domain these are the Fourier coefficients of the mean and the autocovariance functions.
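As a toy illustration of definition (1), the following sketch (our own example, not taken from the paper) simulates a PC series with period d = 12 and checks that the seasonal means estimated over w periods recover the periodic mean function:

```python
import numpy as np

# Toy PC model: X_t = mu(t) + sigma(t) * eps_t with d-periodic mu and sigma,
# so both the mean and the autocovariance are periodic with period d,
# matching definition (1). All names and values here are our own.
rng = np.random.default_rng(0)
d, w = 12, 500                                  # period length, number of periods
t = np.arange(d * w)
mu = 2.0 * np.cos(2 * np.pi * t / d)            # periodic mean
sigma = 1.0 + 0.5 * np.sin(2 * np.pi * t / d)   # periodic scale
X = mu + sigma * rng.standard_normal(d * w)

# Seasonal means over w periods estimate mu(1), ..., mu(d).
seasonal_means = X.reshape(w, d).mean(axis=0)
true_means = 2.0 * np.cos(2 * np.pi * np.arange(d) / d)
```

For large w each seasonal mean is close to the corresponding value of the true periodic mean, which is exactly the periodic structure the block bootstrap methods below are designed to preserve.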
For all mentioned characteristics consistency of the MBB and the GSBB has been already shown (see Synowiecki (2007), Dudek et al. (2014a), Dudek et al. (2014b), Dudek (2015)). PC processes can be applied when the period length is constant. In this paper we focus on a different case, i.e. when the period length is changing over time. A very important example illustrating this phenomena is a chirp signal. It is a signal in which frequency increases or decreases over time. It can be described by the following equation where S(u) is a positive, low-pass, smooth amplitude function whose evolution is slow when compared to the oscillations of the phase ϕ(u). Chirps are commonly met in nature. For example in audio signals (animal communication, echolocation), radar and sonar systems, astrophysics (gravitational waves radiated by coalescing binaries), mechanics and vibrations (e.g. car engines), medicine (EEG data -epileptic seizure) and seismography. For more details and examples we refer the reader to Flandrin (2001) and the references therein. In this paper we consider a case, when period length is growing in time. This corresponds to the so-called down-chirp signal. An example of such signal is presented in Figure 1. To model such phenomena periodic random arrays can be used. Bootstrap for triangular arrays with the period growing in time was first considered by Leśkow and Synowiecki (2010) and later by Dudek et al. (2014a). In both papers only the problem of the estimation of the overall mean was discussed. Leśkow and Synowiecki showed the consistency of the Periodic Block Bootstrap and Dudek et al. of the Generalized Seasonal Block Bootstrap. In this paper we would like to extend the area of possible applications of bootstrap methods for the periodic random arrays to the Fourier coefficients of the mean and autocovariance functions. The paper is organized as follows. Section 2 contains the formulation of the problem. 
In Section 3 the algorithms of the circular version of the GSBB and the MBB are recalled. The bootstrap consistency results for the coefficients of the mean and the autocovariance functions for triangular arrays with growing period are presented in Section 4. In Section 5 the bootstrap-t confidence intervals are constructed and a simulated data example is presented. Finally, in Section 6 we discuss possible future extensions of our work. All proofs can be found in the Appendix.

Problem formulation
Let {X n,t : t = 1, . . . , m n } be an array of real valued random variables. We assume that it is row-wise periodically correlated (PC), i.e. in each row the mean function µ n (t) = E (X n,t ) and the autocovariance function B n (t, τ ) = Cov (X n,t , X n,t+τ ) are periodic in variable t with period d n .
Below we present the Fourier decomposition of the mean and the autocovariance functions in the n-th row of the triangular array. We have

µ_n(t) = ∑_{γ_n∈Γ_n} b_n(γ_n) exp(iγ_n t),   B_n(t, τ) = ∑_{λ_n∈Λ_{n,τ}} a_n(λ_n, τ) exp(iλ_n t),   (2)

where the sets Γ_n, Λ_{n,τ} ⊆ {2kπ/d_n, k = 0, ..., d_n − 1} are finite. Moreover, the coefficients b(γ) and a(λ, τ) can be calculated using the following formulas

b_n(γ_n) = (1/d_n) ∑_{t=1}^{d_n} µ_n(t) exp(−iγ_n t),   a_n(λ_n, τ) = (1/d_n) ∑_{t=1}^{d_n} B_n(t, τ) exp(−iλ_n t),   (3)

and their estimators are of the form

b̂_n(γ_n) = (1/m_n) ∑_{t=1}^{m_n} X_{n,t} exp(−iγ_n t),
â_n(λ_n, τ) = (1/m_n) ∑_{t=1}^{m_n−τ} (X_{n,t+τ} − µ̂_n(t + τ))(X_{n,t} − µ̂_n(t)) exp(−iλ_n t),   (4)

where µ̂_n(t) = ∑_{γ_n∈Γ_n} b̂_n(γ_n) exp(iγ_n t). In the following we use a simplified version of (4). Without loss of generality we can assume that E(X_{n,t}) ≡ 0 and, taking τ ≥ 0, we get

â_n(λ_n, τ) = (1/m_n) ∑_{t=1}^{m_n−τ} X_{n,t} X_{n,t+τ} exp(−iλ_n t).   (5)
Detailed information concerning the coefficients of the mean and autocovariance functions and their estimators can be found in the papers of Hurd (1989), (1991) and Hurd and Leśkow (1992a), (1992b).
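The estimators above can be sketched as follows (a minimal implementation under the zero-mean simplification described in this section; the function names `b_hat` and `a_hat` are ours, and 1-based time indices follow the formulas in the text):

```python
import numpy as np

def b_hat(X, gamma):
    # Estimator of a mean-function coefficient:
    # (1/m) * sum_{t=1}^{m} X_t * exp(-i*gamma*t)
    m = len(X)
    t = np.arange(1, m + 1)
    return np.sum(X * np.exp(-1j * gamma * t)) / m

def a_hat(X, lam, tau):
    # Simplified estimator of an autocovariance coefficient
    # (zero-mean case, tau >= 0):
    # (1/m) * sum_{t=1}^{m-tau} X_t * X_{t+tau} * exp(-i*lam*t)
    m = len(X)
    t = np.arange(1, m - tau + 1)
    return np.sum(X[:m - tau] * X[tau:] * np.exp(-1j * lam * t)) / m
```

For a constant series of ones, `b_hat(X, 0.0)` returns 1 and `a_hat(X, 0.0, tau)` returns (m − τ)/m, which matches the formulas term by term.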

Bootstrap algorithms
In this paper we apply two block bootstrap methods. However, these will not be the MBB and the GSBB in their standard form, but their circular versions. In this approach the data are treated as wrapped on the circle. This allows us to reduce the edge effects caused by the fact that in the MBB and the GSBB pseudo-samples the observations from the beginning and the end of the original sample appear less often than the other observations.
To simplify notation, in this section we assume that Y_t is a PC process with known period d. Assume that (Y_1, ..., Y_n) is the observed sample. Moreover, let the sample length n be an integer multiple of the period length d (n = wd, w ∈ N). First we present the circular version of the GSBB (cGSBB).

cGSBB algorithm
1. Choose a (positive) integer block size b (< n). Then the sample of size n can be split into l blocks of length b and a shorter block of length r, i.e. n = lb + r, l ∈ N and r ∈ {0, ..., b − 1}.
2. For t = 1, b + 1, 2b + 1, ..., lb + 1, let

(Y*_t, Y*_{t+1}, ..., Y*_{t+b−1}) = (Y_{k_t}, Y_{k_t+1}, ..., Y_{k_t+b−1}),

where k_t = t + v_t d and the v_t are iid from the discrete uniform distribution on {0, 1, ..., w − 1}. Since we consider the circular version of the GSBB, when t + vd > n we take the shifted observation with index t + vd − n.
3. Join the l blocks (Y_{k_t}, Y_{k_t+1}, ..., Y_{k_t+b−1}) and take the first n observations (Y*_1, ..., Y*_n) to obtain a bootstrap sample of the same length as the original one.
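The three steps above can be sketched as follows (0-based indexing and n = wd assumed; a sketch of the algorithm, not the authors' code):

```python
import numpy as np

def cgsbb_sample(Y, d, b, rng):
    """cGSBB pseudo-sample (sketch, 0-based indexing): each block start is
    shifted by a random multiple of the period d, so every bootstrap
    observation keeps its position within the period; indices wrap on
    the circle, which implements the circular version."""
    n = len(Y)
    w = n // d                          # number of periods (n = w*d assumed)
    l = int(np.ceil(n / b))             # number of blocks needed
    out = np.empty(l * b, dtype=Y.dtype)
    for j in range(l):
        t = j * b                       # block start in the pseudo-sample
        k = t + d * rng.integers(0, w)  # k_t = t + v*d, v ~ U{0, ..., w-1}
        idx = (k + np.arange(b)) % n    # circular wrap when the index exceeds n-1
        out[j * b:(j + 1) * b] = Y[idx]
    return out[:n]                      # keep the first n observations
```

Because the shift is always a multiple of d, a purely d-periodic input is reproduced exactly by the pseudo-sample, which is the sense in which the cGSBB preserves the periodic structure.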
The second bootstrap method that we consider is the Circular Block Bootstrap of Politis and Romano (1992), which is a circular version of the Moving Block Bootstrap. Compared with the cGSBB, the only difference is in the second step of the algorithm. We do not keep the periodic structure of the data, so we do not need to restrict the set of blocks that can be selected.

CBB algorithm
1. Choose a (positive) integer block size b (< n). Then the sample of size n can be split into l blocks of length b and a shorter block of length r, i.e. n = lb + r, l ∈ N and r ∈ {0, ..., b − 1}.
2. For t = 1, b + 1, 2b + 1, ..., lb + 1, let

(Y*_t, Y*_{t+1}, ..., Y*_{t+b−1}) = (Y_{k_t}, Y_{k_t+1}, ..., Y_{k_t+b−1}),

where the k_t are iid from the discrete uniform distribution on {1, ..., n}. Since we consider the circular version, when k_t + j > n we take the observation with index k_t + j − n.

3. Join the l blocks (Y_{k_t}, Y_{k_t+1}, ..., Y_{k_t+b−1}) and take the first n observations (Y*_1, ..., Y*_n) to obtain a bootstrap sample of the same length as the original one.
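A corresponding sketch of the CBB, differing from the cGSBB sketch only in how the block starts are drawn:

```python
import numpy as np

def cbb_sample(Y, b, rng):
    """Circular Block Bootstrap pseudo-sample (sketch, 0-based indexing):
    block starts are drawn uniformly from all n positions, with circular
    wrapping; no periodic structure is enforced (contrast with the cGSBB)."""
    n = len(Y)
    l = int(np.ceil(n / b))             # number of blocks needed
    out = np.empty(l * b, dtype=Y.dtype)
    for j in range(l):
        k = rng.integers(0, n)          # step 2: k_t ~ U{1,...,n} (0-based here)
        idx = (k + np.arange(b)) % n    # circular wrap
        out[j * b:(j + 1) * b] = Y[idx]
    return out[:n]
```

Thanks to the circular wrap, every observation appears in exactly b of the n candidate blocks, which is the edge-effect reduction mentioned at the start of this section.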

Main results
In the first part of this section we use the cGSBB method to construct consistent estimators of the coefficients of the autocovariance function defined by formula (5). Then, the corresponding result for the coefficients of the mean function is presented. The second part is dedicated to the CBB method.
Let m_n = l_n b_n + r_n, where r_n ∈ {0, ..., b_n − 1}, i.e. the observations in the n-th row of the considered triangular array {X_{n,t} : t = 1, ..., m_n} can be split into l_n blocks of length b_n and a remaining part of length r_n. For the sake of simplicity we use the notation r, l, b instead of r_n, l_n, b_n whenever possible. Moreover, without loss of generality we assume that m_n = w_n d_n, w_n ∈ N, i.e. each row of the considered array contains w_n periods. To obtain our results we additionally need to assume the following conditions:

A1 let {X_{n,t} : t = 1, ..., m_n} be a WP(4) row-wise periodically correlated array of real-valued random variables with period d_n; WP(4) denotes a weakly periodic process of order 4, which means that E|X_{n,t}|^4 < ∞ and for any t, τ_1, τ_2, τ_3 ∈ Z

E(X_{n,t} X_{n,t+τ_1} X_{n,t+τ_2} X_{n,t+τ_3}) = E(X_{n,t+d_n} X_{n,t+τ_1+d_n} X_{n,t+τ_2+d_n} X_{n,t+τ_3+d_n});

A2 m_n = w_n d_n, d_n → ∞ and w_n → ∞ as n → ∞;

A3 let {α_n(k), k = 1, 2, ...} be the α-mixing coefficients of the n-th row of the array {X_{n,t}}, i.e.

α_n(k) = sup_t sup {|P(A ∩ B) − P(A)P(B)| : A ∈ F_n(−∞, t), B ∈ F_n(t + k, ∞)},

where F_n(t_1, t_2) is the σ-field generated by the observations {X_{n,t_1}, X_{n,t_1+1}, ..., X_{n,t_2}}.

Condition A2 means that the period length d_n and the number of observed periods w_n grow to infinity together with the sample size. The constant q in A4 takes the value 1 or 2 depending on the considered case. The α-mixing assumptions A4 and A5 are technical and also appear in Synowiecki (2007). Conditions of this kind are also considered for usual PC time series (see e.g. Synowiecki (2007), Dudek et al. (2014a)). For more details on mixing we refer the reader to Doukhan (1994).
First we define three different bootstrap versions of â_n(λ_n, τ),
where X*_{n,t} for t = 1, ..., m_n is the cGSBB version of the n-th row of X_{n,t}. Note that in formulas (7) and (8) the symbol '*' additionally appears in the argument of the exponential function. In this case the time argument no longer varies from 1 to m_n − τ, but corresponds to the time indices of the observations chosen in the block selection process. To be more precise, assume that the chosen block (X*_{n,t}, ..., X*_{n,t+b−1}) is of the form (X_{n,k_t}, ..., X_{n,k_t+b−1}). Then the values (t*, ..., (t + b − 1)*) are equal to (k_t, ..., k_t + b − 1).

Estimator (6) is the most natural one. It is constructed by replacing the original sample in formula (5) by its bootstrap counterpart. As we show later, this estimator is consistent, because the GSBB preserves the periodic structure in the bootstrap sample. This form of estimator was also used in Dudek et al. (2014b) for the autocovariance function of PC time series. The second estimator uses ideas presented in Dudek (2015); in that paper the results of Dudek et al. (2014b) are extended to the wider class of processes called almost periodically correlated. Since in this case the period usually does not exist, the GSBB cannot be used and hence Dudek considered the CBB. One may notice that (6) and (7) are equal for λ_n ∈ Λ_{n,τ}. Finally, estimator (8) was obtained by removing some summands from (7). To be more precise, (8) is obtained from (7) by subtracting those summands for which X*_{n,t} and X*_{n,t+τ} belong to two different blocks selected in the second step of the cGSBB algorithm.

Let B*_{1+kb} for k = 0, ..., l − 1 be the block of length b of the form (X*_{n,1+kb}, ..., X*_{n,(k+1)b}). The last block, of length r, is denoted by B*_{1+lb} = (X*_{n,lb+1}, ..., X*_{n,m_n}). Since (X*_{n,1}, ..., X*_{n,m_n}) = (B*_1, ..., B*_{1+lb}), we rewrite formula (7) using the new notation.
where the set C*_{b,τ} contains those time indices for which X*_{n,t} and X*_{n,t+τ} belong to different blocks.

Estimator (8) is the most convenient for practical applications among the proposed formulas. As we discuss in Section 5, the construction of bootstrap confidence intervals for a_n(λ_n, τ) is computationally very expensive; in our study we generate 500 000 bootstrap samples. Estimator (8) allows us to reduce this cost substantially. To obtain (6) and (7) one needs to calculate the estimator value according to the appropriate formula for each generated bootstrap sample. Formula (8) allows for a different approach. Note that each summand in (8) is based on a different block B*_{1+kb}, k = 0, ..., l. Thus, instead of using the cGSBB to get (X*_{n,1}, ..., X*_{n,m_n}) and then calculating â*_n(λ, τ), one can create new random variables Y^b_j and Y^r_j for j = 1, ..., m_n. Here Y^b_j and Y^r_j represent those parts of the estimator â_n(λ_n, τ) that are based on the blocks of length b and r, respectively, starting with observation X_{n,j}. As in the cGSBB algorithm we treat the data as wrapped on the circle, which means that if any time index t is bigger than m_n we take t − m_n instead. Now â*_n(λ, τ) can be rewritten as a sum of these variables evaluated at the random block starts k_{1+kb}, k = 0, ..., l, which are the iid random variables from step 2 of the cGSBB algorithm (see Section 3). Thus, once the values of Y^b_j and Y^r_j are calculated, we can very easily obtain many values of â*_n(λ, τ): it is enough to generate the appropriate number of strings k_{1+kb}, k = 0, ..., l, and sum the corresponding Y^b_{k_{1+kb}} and Y^r_{k_{1+lb}} values, which is computationally very cheap.
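The computational trick above can be sketched as follows (our own notation, restricted to the case m_n = lb with no remainder block and τ < b): `block_contrib` precomputes the within-block contributions Y^b_j once, after which each bootstrap replicate of estimator (8) reduces to summing l precomputed numbers.

```python
import numpy as np

def block_contrib(X, lam, tau, b):
    """Precompute Y^b_j, j = 0, ..., m-1 (0-based): the contribution to
    estimator (8) of a block of length b starting at observation j.
    Only pairs with both X_t and X_{t+tau} inside the same block are
    kept, and indices are wrapped on the circle, as in the text.
    Assumes 0 <= tau < b. A sketch in our own notation."""
    m = len(X)
    Yb = np.empty(m, dtype=complex)
    offs = np.arange(b - tau)                  # within-block offsets
    for j in range(m):
        s = (j + offs) % m                     # circular time indices
        Yb[j] = np.sum(X[s] * X[(s + tau) % m] * np.exp(-1j * lam * s)) / m
    return Yb

def a_star_fast(Yb, starts):
    # One bootstrap replicate of estimator (8): sum the precomputed
    # contributions at the randomly chosen block starts -- no resampling
    # of the data itself is needed.
    return np.sum(Yb[starts])
```

Generating many bootstrap replicates then only requires drawing new vectors of block starts, which is what makes estimator (8) attractive when hundreds of thousands of bootstrap samples are needed.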
Below we present the cGSBB consistency result. For the sake of simplicity we formulate the theorem using the notation â*_n(λ_n, τ), which may stand for any of the estimators (6)-(8).
and σ²ᴿ_n, σ²ᴵ_n are assumed to be positive.

An analogous result can also be formulated in terms of the coefficients of the mean function. In this case we consider two bootstrap estimators of (3). One may note that the estimator of (3) corresponding to (8) is exactly equal to (14). Moreover, if m_n = l_n b_n, the estimator (14) obtained using the CBB method is unbiased, i.e. E*(b̂*_n(γ_n)) = b̂_n(γ_n). If additionally m_n = w_n d_n, its cGSBB version is also unbiased. One should be aware that this property does not hold in general for the bootstrap estimators (6)-(8). Using the CBB or the cGSBB ensures that each observation appears in the same number of blocks, but this fact provides an unbiased estimator only when τ = 0; in that case estimator (7) is unbiased under the same conditions as just discussed for estimator (14). If τ > 0, then the bias is caused by the fact that joining the selected blocks to create a bootstrap pseudo-sample introduces a dependence structure that was not present in the original sample. To calculate X*_t X*_{t+τ} for X*_t and X*_{t+τ} belonging to two consecutive bootstrap blocks, we often use observations that were not τ time units apart in the original data. This effect cannot be removed by using circular versions of block bootstrap methods.
In the theorem below we present bootstrap consistency for the coefficients of the mean function. For simplicity we use the notation b̂*_n(γ_n) to avoid formulating the result for (13) and (14) separately.
and σ²ᴿ_n, σ²ᴵ_n are assumed to be positive. Here B_{X_n}(t, τ) is the autocovariance function of the n-th row of the array X_{n,t}, and σ̂*ᴿ_n, σ̂*ᴵ_n are the bootstrap counterparts of σᴿ_n, σᴵ_n.
The advantage of the GSBB method is that it preserves the periodic structure contained in the data. However, to use it we need to know the period length, which in some applications is not obvious (for more details we refer the reader to Napolitano (2012)). So far no research has been done to prove consistency of the GSBB in such a case; moreover, a study of the impact of the period-length estimation error on the results should be performed. The alternative approach in such a situation is to apply a block bootstrap method that does not require knowledge of the period length, such as the CBB. Please note that since the bootstrap sample obtained with the CBB is usually not periodic in moments, the estimator (6) is in general not consistent. However, if (7) or (8) is considered, the assertions continue to hold.
Remark In this paper we consider the circular versions of the block bootstrap methods. The main reason for that is to reduce the edge effects. In the cGSBB and the CBB each observation is present in the same number of blocks. When the GSBB or the MBB is used, the observations from the beginning and the end of the sample appear in fewer blocks than the others, which can cause an additional bias. However, if in the theorems presented in this section the cGSBB and the CBB are replaced by the GSBB and the MBB, respectively, all the assertions remain valid. The changes in the proofs are minor and hence we do not present the technical details.

Confidence intervals
In this section we construct bootstrap equal-tailed confidence intervals for the real and imaginary parts of a_n(λ_n, τ) using the bootstrap-t method (see Efron and Tibshirani (1993)). So far, in all applications for PC processes, percentile bootstrap confidence intervals were constructed (see e.g. (2015)), because the bootstrap was applied to avoid variance estimation. In our simulation we substitute the variance estimator by its bootstrap counterpart (see Chapter 6 in Efron and Tibshirani (1993)). Moreover, to calculate the bootstrap quantiles we need to compute a bootstrap estimate of the standard error for each bootstrap sample; thus, we have two nested levels of bootstrap sampling (see p. 162 in Efron and Tibshirani (1993)).

To be more precise, for a given row of our triangular array {X_{n,t} : t = 1, ..., m_n} and a chosen set of frequencies λ_n and lags τ, we generate B = 1000 bootstrap samples. Then, for each sample we get ℜâ*ⁱ_n(λ_n, τ) and ℑâ*ⁱ_n(λ_n, τ), i = 1, ..., B. For each of these samples we additionally generate inner bootstrap samples and compute ℜâ*ⁱʲ_n(λ_n, τ) and ℑâ*ⁱʲ_n(λ_n, τ), where i = 1, ..., B and j = 1, ..., B_1. The quantiles are obtained using studentized statistics. Finally, for each point (λ_n, τ) the confidence interval for ℜa_n(λ_n, τ) is of the form

(â_n(λ_n, τ) − t*ᴿ_{0.975} σ̂*ᴿ_n, â_n(λ_n, τ) − t*ᴿ_{0.025} σ̂*ᴿ_n),

where t*ᴿ_{0.025}, t*ᴿ_{0.975} are the 2.5% and 97.5% quantiles of the studentized statistics, respectively, and σ̂*ᴿ_n is the bootstrap estimate of the standard error computed from ℜâ*˙_n(λ_n, τ) = (1/B) ∑_{i=1}^{B} ℜâ*ⁱ_n(λ_n, τ). The confidence interval for ℑa_n(λ_n, τ) is constructed correspondingly.
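The nested two-level procedure above can be sketched generically as follows (a sketch after Efron and Tibshirani (1993), not the paper's exact implementation; `stat` and `resample` are user-supplied placeholders for the statistic and the block resampling scheme, and the iid resampler in the usage example merely stands in for the CBB or cGSBB):

```python
import numpy as np

def boot_t_ci(X, stat, resample, B=200, B1=50, alpha=0.05, rng=None):
    """Bootstrap-t equal-tailed interval with a nested bootstrap estimate
    of the standard error: the SE in each studentized statistic is itself
    estimated by an inner bootstrap, giving two nested sampling levels."""
    rng = rng if rng is not None else np.random.default_rng()
    theta = stat(X)
    # SE estimate for the original sample (inner bootstrap level).
    se = np.std([stat(resample(X, rng)) for _ in range(B1)], ddof=1)
    tstats = []
    for _ in range(B):                      # outer bootstrap level
        Xs = resample(X, rng)
        theta_s = stat(Xs)
        # Inner bootstrap level: SE of the statistic on the pseudo-sample.
        se_s = np.std([stat(resample(Xs, rng)) for _ in range(B1)], ddof=1)
        tstats.append((theta_s - theta) / se_s)
    q_lo, q_hi = np.quantile(tstats, [alpha / 2, 1 - alpha / 2])
    return theta - q_hi * se, theta - q_lo * se

# Usage with an iid resampler standing in for the block methods:
rng = np.random.default_rng(3)
X = rng.standard_normal(50)
iid = lambda Z, r: Z[r.integers(0, len(Z), len(Z))]
lo_ci, hi_ci = boot_t_ci(X, np.mean, iid, B=100, B1=30, rng=rng)
```

Note the B × B1 cost of the nesting, which is the computational burden that estimator (8) is designed to alleviate.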
Below we provide an application of our results to the Doppler effect. The Doppler effect is a change in the frequency of a signal caused by contractions/dilations of time. It appears when the source emitting the signal (called the transmitter) and the observer (called the receiver) exhibit relative motion; it can be heard e.g. when a vehicle sounding a siren approaches a person. In our study we consider a continuous-time process of the form

χ(u) = S(u) cos(2πf_0 u + πγu²),   u ∈ R,

where γ is the chirp rate, i.e. the instantaneous rate of change of the frequency of the waveform. The chirp χ(u) describes the received signal in the case of a transmitted signal S(u) cos(2πf_0 u) and relative motion between transmitter and receiver with constant relative radial acceleration. A radial acceleration exists when an object moves on a curve. For more details we refer the reader to Napolitano (2012), Secs. 2.2.6.1 and 7.8.1.1. If S(u) is wide-sense stationary, then S(u) cos(2πf_0 u) is cyclostationary and χ(u) is generalized almost-cyclostationary (GACS). In the engineering language, χ(u) is said to have the time-varying carrier frequency f_0 + (γ/2)u.

Let us consider the uniform sampling of χ(u) with sampling period T_s = 1/f_s and assume that it is sampled in several time intervals labeled by n = 1, ..., N. In a generic time interval (u_{n,0}, u_{n,0} + g_n T_s) the time-varying carrier frequency of the chirp-modulated process χ(u) ranges from f_0 + (γ/2)u_{n,0} to f_0 + (γ/2)(u_{n,0} + g_n T_s). We work under the assumption that the carrier frequency can be considered constant and equal to f_0 + (γ/2)u_{n,0} in the whole interval. In such a case χ(u) for u ∈ (u_{n,0}, u_{n,0} + g_n T_s) can be modeled as cyclostationary with period

T_{n,0} = 1/(f_0 + (γ/2)u_{n,0}).
If T_{n,0}/T_s = q_n is rational and q_n = c_n/d_n with c_n and d_n relatively prime, then the discrete-time sampled signal X_{n,t} = χ(u)|_{u=tT_s} for u ∈ (u_{n,0}, u_{n,0} + g_n T_s) is cyclostationary with period d_n (see Izzo and Napolitano (1996) and (1997)). If T_{n,0} and T_s are incommensurate, then X_{n,t} is almost cyclostationary with cycle frequencies kT_s/T_{n,0} modulo 1, where k is an integer. In addition, it should be noticed that in order to satisfy (22), the sample size g_n of the block cannot be too large. Consequently, g_n cannot be increased without bound. Therefore, a limit exists to the best accuracy that can be achieved with the proposed technique in the considered example.
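The computation of the local integer period of the sampled carrier can be illustrated as follows (illustrative parameter values of our own, not the paper's, with a negative chirp rate so that the analog period grows between intervals as in the down-chirp case considered in this paper):

```python
from fractions import Fraction

# Within interval n the carrier is approximated by f_n = f0 + (gamma/2)*u_n0.
# The sampled carrier cos(2*pi*f_n*Ts*t) repeats every p samples, where p is
# the lowest-terms denominator of the rational number f_n*Ts. The analog
# period 1/f_n grows between intervals, though the integer period p of the
# sampled signal need not be monotone.
f0, gamma, Ts = 0.10, -1e-4, 1.0     # carrier, (negative) chirp rate, sampling period
periods = []
for u_n0 in [0.0, 500.0, 1000.0]:
    f_local = f0 + 0.5 * gamma * u_n0          # 0.1, 0.075, 0.05
    p = Fraction(f_local * Ts).limit_denominator(1000).denominator
    periods.append(p)
print(periods)  # -> [10, 40, 20]
```

The analog carrier periods here are 10, 13.33 and 20 sampling intervals, so the sampled signal needs 10, 40 and 20 samples, respectively, before its cycle repeats exactly.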
For our study the following parameter values were set, where ε_t are iid random variables from the standard normal distribution.
Thus, we expect to detect frequencies that are close to the true values. For all considered block lengths b the results are very similar and hence we restrict the presentation to b = 3 (see Figure 2). Moreover, Table 1 contains the detected frequencies, i.e. those for which the appropriate confidence intervals do not contain 0. One may notice that for the cGSBB we detect the same frequencies independently of the value of b. Most importantly, these are exactly the frequencies that are closest to the true ones. The situation is similar for the CBB used for ℑâ_n(λ_n, 0). Even though the number of detected frequencies is not constant (for b = 3 we have 5 frequencies and for b ∈ {6, 9, 18} 3 frequencies), the detected frequencies are again the closest to the true ones. Additionally, for b ∈ {6, 9, 18} these are exactly the same frequencies as were detected using the cGSBB. The situation is slightly different in the case of ℜâ_n(λ_n, 0). We have 5 significant frequencies for b ∈ {3, 9, 18} and 6 for b = 6. We always detect 0 Hz, 0.34 Hz and 0.66 Hz; these frequencies were also detected using the cGSBB for ℜâ_n(λ_n, 0). But among the other frequencies we have definitely false detections: λ_n = 0.5 Hz (for b = 6), λ_n = 0.39 Hz and λ_n = 0.61 Hz (for b = 18). It seems that the CBB is more sensitive to the block length choice. However, one should remember that the CBB does not require knowledge of the period length. In the case when d_n needs to be estimated, the estimation error can affect the performance of the GSBB significantly. Since the CBB does not use d_n, it can be the best choice in many practical applications.

Table 1: For each value of the block length b, the detected frequencies (Hz) using 95% bootstrap-t equal-tailed pointwise confidence intervals for ℜâ_n(λ_n, 0) (columns 2 and 4) and ℑâ_n(λ_n, 0) (columns 3 and 5) are presented. Columns 2-3 are results for the cGSBB, columns 4-5 for the CBB.
Figure 2: Estimates together with bootstrap-t pointwise equal-tailed confidence intervals (black). The first and second rows show results for the cGSBB and the CBB, respectively. The nominal coverage probability is 95%, block length b = 3. On the x-axis: frequencies λ_n/(2π).

Conclusions and open questions
In this paper we consider time series with growing period. Under some moment and mixing assumptions we provide consistency results for two block bootstrap approaches, namely the Circular Block Bootstrap and the circular version of the Generalized Seasonal Block Bootstrap. The usual MBB and GSBB can also be applied to our problem. Consistent bootstrap-t equal-tailed confidence intervals for the coefficients of the mean and the autocovariance functions are constructed. Since the considered model is very important and can be applied in many different fields, e.g. to chirp signals, there is a need to continue research on this topic. In Section 5 we considered a simple chirp signal with a stationary component. However, in some applications this part of the signal is assumed to be heavy-tailed, and hence our future research will be dedicated to extending our results to such models.

Appendix
Proof of Theorem 4.1 Without loss of generality we assume that m_n is an integer multiple of the block length b_n (m_n = l_n b_n). To simplify notation we write b instead of b_n.
To show asymptotic normality of the considered estimator we use Lemma 4 from Leśkow and Synowiecki (2010). We show the details of the proof only for the real part of the estimator â_n(λ_n, τ); the reasoning for the imaginary part is the same. Denote by {W_{n,t} : t = 1, ..., m_n − τ} the array with elements of the form W_{n,t} = X_{n,t} X_{n,t+τ} cos(λ_n t). Since X_{n,t} is row-wise PC and WP(4), the array X̃_{n,t} = X_{n,t} X_{n,t+τ} is row-wise PC with period d_n. Moreover, cos(λ_n t) is a periodic function with period d_n. Thus, the array W_{n,t} is also row-wise PC with period d_n. Moreover, we show that (23) holds. The variance in question is a sum of terms of the form Cov(X̃_{n,t}, X̃_{n,s}) cos(λ_n t) cos(λ_n s). The array X̃_{n,t} is row-wise α-mixing with α_{X̃_n}(k) = α_n(k − τ) for k > τ and α_{X̃_n}(k) = α_n(0) otherwise. First we show that the third summand is asymptotically negligible; for the second summand the reasoning is the same, so this case is omitted. The terms Cov(X̃_{n,t}, X̃_{n,s}) cos(λ_n t) cos(λ_n s) can be bounded from above using a positive constant C independent of n. Moreover, the first summand in (24) can be written as a double sum of covariances weighted by cos(λ_n t) cos(λ_n s).
The absolute value of the last term on the right-hand side can be bounded from above; we denote the summands of the resulting bound by I, II and III, respectively.
Using (25) and assumption (ii) we obtain a bound in which C_1 is a positive constant independent of n. Analogously one can show that I = O(1/m_n). Additionally, using assumption (ii), we get a bound in which C_2 is a positive constant independent of n.
To get (23) we consider the difference between the sum of the terms Cov(X̃_{n,t}, X̃_{n,s}) cos(λ_n t) cos(λ_n s) and σ²ᴿ_n. Since |p/m_n − 1/d_n| → 0 as n → ∞, the expression on the left-hand side can be rewritten equivalently in terms of Cov(X̃_{n,t}, X̃_{n,t+s}).
Using (25) the right-hand side of the above inequality can be bounded appropriately. From assumption (ii) and the Toeplitz Lemma the last expression tends to 0 as n → ∞, and simultaneously we get (23). As a result W_{n,t}/σᴿ_n fulfills the assumptions of Lemma 4 from Leśkow and Synowiecki (2010). To prove the bootstrap consistency we use techniques developed in Dudek (2015) and Dudek et al. (2014b). We start with the estimators â*_n(λ, τ) and ã*_n(λ, τ). We show only (9), because the proof of (10) is analogous. The real part ℜã*_n(λ_n, τ) can be rewritten accordingly. Thus, ℜã*_n(λ, τ) is obtained from ℜâ*_n(λ, τ) after subtraction of those summands for which X*_{n,t} and X*_{n,t+τ} belong to two different blocks. Moreover, ℜã*_n(λ, τ) and ℜâ*_n(λ, τ) are asymptotically equivalent, i.e.

√m_n |ℜ(â*_n(λ_n, τ)) − ℜ(ã*_n(λ_n, τ)) − E*(ℜ(â*_n(λ_n, τ))) − E*(ℜ(ã*_n(λ_n, τ)))|

is negligible. The proof follows the same steps as the one presented in Dudek et al. (2014b).
The bootstrap version of Z_{t,b} is defined analogously. For t = 1, b + 1, ..., (l − 1)b + 1 the random variables Z*_{t,b} are conditionally independent (given the sample (X_1, ..., X_n)) with common distribution assigning probability 1/w_n to each value, for k = 0, ..., w_n − 1.
By Corollary 2.4.8 from Araujo and Giné (1980), to get condition (9) we need to show that conditions (26)-(28) hold for any ν > 0. The proof of (26)-(28) is similar to the one presented in Dudek et al. (2014b).
Note that the expectation E(b^{−1/2} Z_{1+kb+sd,b}) can be handled as in Synowiecki (2007). Following the reasoning of Dudek et al. (2014a) we get that the absolute expected values of (26) and (27) can be bounded from above accordingly. Condition (28) can be expressed as a sum of two terms, which we denote by IV and V, respectively. Following the reasoning of Dudek et al. (2014a) and Leśkow and Synowiecki (2010) we get that |IV − σ²ᴿ_n| tends to 0 in probability as n → ∞. The key step is to show the corresponding bound.
Additionally, since {W_{n,t}} is row-wise periodically correlated, we get a bound in which C_3 is some positive constant independent of n.
Under assumption (iii) the sum on the right-hand side can be bounded from above by an expression in which C_4 is some positive constant independent of n. Note that V tends to 0 in probability. Condition (ii) on the α-mixing function corresponds to the one used by Dudek et al. (2014a). But since the period length d_n → ∞, for some ξ > 1/2 we obtain a different rate, which is the main difference compared to the arguments provided by Dudek et al. (2014a). This ends the proof of the theorem for ℜâ*_n(λ_n, τ) and ℜã*_n(λ_n, τ). Finally, to get the corresponding result for the third bootstrap version it is enough to follow the same reasoning as presented above, with the necessary modifications of notation and using the fact that the rows of the triangular array are WP(4).

Proof of Theorem 4.2
Since the proof is a simplified version of the reasoning presented in the proof of Theorem 4.1, it is omitted.

Proof of Theorem 4.3
The proof follows exactly the same steps as the proof of Theorem 4.1 and hence it is omitted. The main difference is the number of blocks that can be selected when the bootstrap sample is constructed: for the cGSBB this number equals w_n, while for the CBB it equals the sample length m_n.