Stationary subspace analysis based on second-order statistics

In stationary subspace analysis (SSA) one assumes that the observable p-variate time series is a linear mixture of a k-variate nonstationary time series and a (p − k)-variate stationary time series. The aim is then to estimate an unmixing matrix which transforms the observed multivariate time series into stationary and nonstationary components. In the classical approach, multivariate data are projected onto stationary and nonstationary subspaces by minimizing a Kullback-Leibler divergence between Gaussian distributions, and the method detects nonstationarities in the first two moments only. In this paper we consider SSA in a more general multivariate time series setting and propose SSA methods which are able to detect nonstationarities in mean, variance and autocorrelation, or in all of them. Simulation studies illustrate the performance of the proposed methods, and it is shown that especially the method detecting all three types of nonstationarity performs well in various time series settings. The paper is concluded with an illustrative example.


Introduction
Multivariate time series data are observed in many application areas and are often very challenging to model. To ease the analysis, it is often assumed that the observed time series can be decomposed into latent components with different exploitable properties. One common approach is to apply one of the blind source separation (BSS) methods to the observed data as a pre-processing tool in order to estimate the latent components. In a special case of BSS, that is, in second-order source separation (SOS), it is assumed that the latent components are uncorrelated second-order stationary time series, and in the independent component time series model the assumption of uncorrelatedness is replaced with the assumption of independence (see e.g. [1] and references therein). In nonstationary source separation (NSS), as defined in [2], the variances of the uncorrelated sources are allowed to change over time. For a recent review of these and other variants of BSS, see [3,4,5].
In stationary subspace analysis (SSA), the underlying model states that the observable p-variate time series is a linear mixture of a k-variate nonstationary time series and a (p − k)-variate stationary time series. The aim is then to factorize the multivariate time series into stationary and nonstationary components. Such a factorization is useful in several real-world applications, e.g., in biomedical signal processing, speech recognition, image analysis and econometrics, where either the stationary or the nonstationary components are of interest. SSA was introduced in [6], where the matrix that separates the subspaces of stationary and nonstationary components was found by dividing the observed multivariate time series into K segments and minimizing a Kullback-Leibler (KL) divergence between Gaussian distributions, thus measuring differences in means and covariances across segments. We refer to this method as KL-SSA. Notice that in KL-SSA a time series is considered stationary if its first two moments are time-invariant; stationarity with respect to the autocorrelations is not taken into account. [6] discussed the theoretical properties of the proposed method and proposed a sequential likelihood ratio test for testing the dimension of the stationary subspace. The method was applied to EEG data analysis in [6,7] and to computer vision in [8], and extended to change-point detection in [9,10], where the nonstationary components were of main interest. In [11,12] several alternatives to the Kullback-Leibler divergence in SSA were given.
In this paper we propose a novel SSA algorithm that detects nonstationarities in the first two moments as well as in the autocorrelations. The method can be seen as an extension of analytic SSA (ASSA), an SSA method suggested in [13], to a more general multivariate time series setting. Notice that [14] considered an alternative extension via a frequency domain approach. As another contribution we make a connection between SSA and supervised dimension reduction (SDR) methods. The paper is organized as follows. In Section 2 we first review the model assumptions underlying SSA and discuss the two classical methods for performing SSA, that is, KL-SSA and ASSA. We also show how SSA can be seen as a supervised dimension reduction method and propose and compare methods for detecting different types of nonstationarity. Section 3 discusses practical issues and the identifiability of stationary and nonstationary components. Section 4 presents several simulation studies comparing the performance of the different methods under various scenarios, together with an example. The paper is concluded with some discussion in Section 5.

Stationary subspace analysis
In this section we review the model underlying SSA, show some connections between SSA and supervised dimension reduction methods, and suggest methods to detect changes in mean, variance, correlation structure, or in all of them. Tools to identify the type of nonstationarity the components exhibit are also given.

Model formulation and notations
Let x_t be an observable p-variate nonstationary time series which can be decomposed into a stationary part and a nonstationary part according to the SSA model as defined in [6],

x_t = A z_t = A_n n_t + A_s s_t, (1)

where the p-variate latent time series z_t = (n_t', s_t')' consists of a k-variate nonstationary time series n_t and a (p − k)-variate stationary time series s_t. The two components are mixed using a full-rank p × p matrix A = (A_n, A_s), where A_n and A_s are p × k and p × (p − k) matrices, respectively. The aim of SSA is to estimate the unmixing matrix W = A^{-1} so that W x_t is partitioned into stationary and nonstationary time series. Depending on the assumptions on s_t and n_t, different ways to estimate W have been suggested in the literature [6,13]. In this paper we make the following assumptions on the latent time series:

(A1) E(s_t) = 0, Cov(s_t) = I_{p−k} and Cov(s_t, s_{t+τ}) < ∞,
(A2) E(n_t) < ∞ and Cov(n_t) = D_t, where D_t is a diagonal matrix with positive diagonal elements,
(A3) Cov(s_t, n_{t+τ}) = 0 for all τ ≥ 0.
We thus assume that the (p − k) time series in s_t are second-order stationary with finite autocovariances. The first assumption fixes the location and covariance matrix of the stationary part for convenience. The second assumption states that the k nonstationary components in n_t have finite first moments and are uncorrelated, and the third assumption states that the nonstationary components are uncorrelated with the stationary components.
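For concreteness, the model (1) and assumptions (A1)-(A3) can be mimicked in a small simulation sketch. The component processes, dimensions and mixing matrix below are our own arbitrary illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
T, p, k = 1000, 3, 1                 # length, observed dimension, nonstationary dim

# Stationary part s_t: two AR(1) series with mean zero (cf. assumption A1).
s = np.zeros((T, p - k))
for t in range(1, T):
    s[t] = 0.5 * s[t - 1] + rng.standard_normal(p - k)

# Nonstationary part n_t: white noise whose variance grows over time (cf. A2).
n = rng.standard_normal((T, k)) * np.linspace(0.5, 3.0, T)[:, None]

# Mix with a full-rank p x p matrix A: x_t = A z_t = A_n n_t + A_s s_t.
z = np.hstack([n, s])
A = rng.standard_normal((p, p))
x = z @ A.T                          # observed p-variate time series, row t is x_t'
```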
Despite these assumptions the model is not well defined, as there are many decompositions of x_t for which the assumptions hold. Here we are interested in the one with the minimal value of k. The stationary components are only specified up to a rotation by an orthogonal matrix, whereas the nonstationary components can be marginally rescaled, shifted and also rotated. Therefore, for convenience of presentation, we make an additional assumption on n_t which fixes the location and scale of the nonstationary components for the observed time span T.
The common idea in [6,13] was to divide the observed time series into K segments, compute the first- and second-order statistics for each segment, and measure their nonconstancy across segments. To be more exact, assume that x_t is observed at time points 1, ..., T, and let T_1, ..., T_K denote K disjoint subsets of {1, ..., T}, e.g., non-overlapping intervals. Let then

m_i(x) = (1/|T_i|) Σ_{t ∈ T_i} x_t and S_{τ,i}(x) = (1/|T_i|) Σ_{t ∈ T_i} (x_t − m_i(x))(x_{t+τ} − m_i(x))'

denote the sample mean and the sample (auto)covariance matrix computed using the data from interval T_i, respectively. In [6,13] the unmixing matrix revealing the stationary sources was chosen so that the means and covariance matrices vary the least across all intervals. It was also shown that, using the standardised time series

y_t = S_{0,T}(x)^{-1/2} (x_t − m_T(x)), (2)

where m_T(x) and S_{0,T}(x) denote the sample mean and covariance matrix of the full time series, the search can be restricted to orthogonal matrices only. In [6] the KL divergence between the Gaussian distributions on each interval and the standard normal distribution was minimized. Further, in [13] a second-order Taylor approximation was applied to the objective function of KL-SSA, and the SSA problem was shown to reduce to a simple generalized eigenvalue problem. We review the resulting analytic SSA (ASSA) method in Section 2.7. Notice that in both approaches the focus was on detecting stationarity deviations in mean and in variance but not in autocorrelation.
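In code, the standardisation (2) and the interval statistics can be sketched as follows (the helper names are our own, and equal-length non-overlapping intervals are used for simplicity):

```python
import numpy as np

def whiten(x):
    """Standardise as in (2): y_t = S_0,T(x)^(-1/2) (x_t - m_T(x))."""
    xc = x - x.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    S0_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T   # symmetric inverse root
    return xc @ S0_inv_sqrt

def interval_stats(y, K):
    """Sample mean and covariance on K non-overlapping intervals T_1, ..., T_K."""
    return [(y[part].mean(axis=0), np.cov(y[part], rowvar=False))
            for part in np.array_split(np.arange(len(y)), K)]

rng = np.random.default_rng(1)
y = whiten(rng.standard_normal((600, 4)))
stats = interval_stats(y, K=6)       # list of (m_i(y), S_0,i(y)) pairs
```

By construction the whitened series has zero mean and identity sample covariance, which is what allows the subsequent search to be restricted to orthogonal matrices.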

Connection to supervised dimension reduction
Before going into specific SSA methods we point out a connection between SSA and supervised dimension reduction (SDR). In supervised dimension reduction we have a response z and a p-variate predictor vector x, and the goal is to find a k × p matrix W such that W x carries all relevant information about z, that is, x ⊥⊥ z | W x, with the smallest possible value of k ≤ p. See [15,16] for some general overviews. Two popular methods in this context are sliced inverse regression (SIR) [17] and sliced average variance estimation (SAVE) [18]. For a sample (z_j, x_j), j = 1, ..., n, both methods start by whitening x_j, that is, by computing y_j = S_{0,n}^{-1/2}(x_j − m_n(x)), and then group the whitened observations into K so-called slices H_1, ..., H_K according to their response values z_j. By a slight abuse of the notation from above, one is then interested in the slice means m_i(y) = |H_i|^{-1} Σ_{j ∈ H_i} y_j and the slice covariance matrices S_i(y), i = 1, ..., K. The SIR estimate is then obtained using the covariance of the slice means, and the SAVE estimate is based on the average of the squared differences between the slice covariance matrices and the covariance matrix of all y_j, which due to the whitening equals I_p. [19] shows that SAVE is more comprehensive than SIR in detecting the subspace of interest and gives situations where SIR fails. The increased flexibility of SAVE comes, however, at the cost of requiring more data and being more sensitive to the number of slices [19]. Thus, while theoretically superior, in practice SIR is still often preferred. Also combinations of SIR and SAVE are regularly investigated; see for example [20,21,22] for more details.
If the response z_i is categorical, slicing is usually done according to the unique values of z_i, while for numeric z_i the slices are usually chosen so that they contain approximately the same number of observations. In the latter case the number of slices has to be chosen, and the choice is a trade-off between having many slices in order to find the directions and having enough data points per slice in order to estimate the slice statistics sufficiently well. SIR is considered quite robust with respect to the number of slices, while SAVE is considered quite sensitive. For example, for SIR the number of slices should exceed the dimension of the subspace to be detected.
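As a minimal illustration of SIR, the following is our own toy implementation on iid regression data (not the full estimator of [17]): the predictor is whitened, the observations are sliced by the response into equal-sized slices, and the weighted covariance of the slice means is eigendecomposed.

```python
import numpy as np

def sir(x, z, K=10):
    """SIR sketch: eigendecompose the weighted covariance of slice means."""
    n, p = x.shape
    xc = x - x.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    W0 = vecs @ np.diag(vals ** -0.5) @ vecs.T     # whitening matrix
    y = xc @ W0
    M = np.zeros((p, p))
    for part in np.array_split(np.argsort(z), K):  # equal-sized slices by response
        m = y[part].mean(axis=0)
        M += len(part) / n * np.outer(m, m)
    ev, U = np.linalg.eigh(M)
    idx = np.argsort(ev)[::-1]                     # largest eigenvalues first
    return ev[idx], U[:, idx].T @ W0               # eigenvalues, direction rows

rng = np.random.default_rng(7)
x = rng.standard_normal((5000, 4))
z = x[:, 0] + 0.1 * rng.standard_normal(5000)     # z depends on x through x[:, 0] only
ev, W = sir(x, z)
```

Here the leading estimated direction aligns with the first coordinate axis, the only direction informative about z.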
In the context of SSA one can now think of the intervals as the response, where z gets a distinct value in each interval. Then naturally only the nonstationary components carry information about the "response". In the following sections we consider SSA methods that correspond to sliced inverse regression (SIR) [17] and sliced average variance estimation (SAVE) [18], respectively. Section 2.5 proposes a method for detecting stationarity deviations in autocorrelation, as these cannot be detected by SIR and SAVE type approaches. Further, inspired by the success of hybrid methods in SDR, we also consider a combined approach in Section 2.7.
Note that SDR methods like SIR and SAVE, and combinations of these, have also been extended to time series settings, see [23,24]. However, these methods again focus on regression modeling and not on detecting nonstationary subspaces.

Nonstationarity in mean
Consider first an SSA method that aims at detecting stationarity deviations in mean. Examples of such nonstationary components include trend, seasonal and cyclic components. Assume now that x_t is generated from the SSA model (1) and write y_t for the time series standardised using (2). Let then

M_m = Σ_{i=1}^{K} (|T_i|/T) m_i(y) m_i(y)', (3)

be the covariance matrix of the means of the different intervals, weighted by the interval lengths. Notice that with seasonal and cyclic components, the intervals have to be chosen so that they do not correspond to the cycles. This is also illustrated in Section 2.6.
It is easy to see that the eigenvalues of M_m that correspond to the components that are stationary in mean should be zero. Further, the components with non-zero eigenvalues correspond to components that are nonstationary in mean.
Hence write the eigenvalue decomposition of M_m as

M_m = U_m' D_m U_m,

where D_m is a p × p diagonal matrix with the eigenvalues of M_m as diagonal elements, and U_m = (U_m,n', U_m,s')' contains the eigenvectors of M_m as rows, arranged so that U_m,n is the k_m × p matrix containing the eigenvectors belonging to the non-zero eigenvalues and the (p − k_m) × p matrix U_m,s contains the remaining ones. The rows of the resulting unmixing matrices W_m,n = U_m,n S_{0,T}(x_t)^{-1/2} and W_m,s = U_m,s S_{0,T}(x_t)^{-1/2} then generate the nonstationary and stationary subspaces, respectively. Naturally k_m ≤ k, with equality only if for all nonstationary components the means differ between at least some of the chosen intervals. As this method is based on interval means, it corresponds basically to SIR and we denote it accordingly by SSAsir.
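A compact numerical sketch of SSAsir (our own illustrative implementation, not the authors' reference code), applied to a series with a mean level shift:

```python
import numpy as np

def ssa_sir(x, K):
    """SSAsir sketch: eigendecompose M_m, the weighted covariance of the
    interval means of the whitened series; rows of W are ordered by eigenvalue."""
    T, p = x.shape
    xc = x - x.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    W0 = vecs @ np.diag(vals ** -0.5) @ vecs.T         # S_0,T(x)^(-1/2)
    y = xc @ W0
    M = np.zeros((p, p))
    for part in np.array_split(np.arange(T), K):
        m = y[part].mean(axis=0)
        M += len(part) / T * np.outer(m, m)            # weighted by interval length
    ev, U = np.linalg.eigh(M)
    idx = np.argsort(ev)[::-1]
    return ev[idx], U[:, idx].T @ W0

rng = np.random.default_rng(2)
T = 2000
n = rng.standard_normal(T) + np.r_[np.zeros(T // 2), 3 * np.ones(T // 2)]
s = rng.standard_normal((T, 2))                        # stationary white noise
x = np.column_stack([n, s]) @ rng.standard_normal((3, 3)).T
ev, W = ssa_sir(x, K=6)
rec = (x - x.mean(axis=0)) @ W.T                       # first row: nonstationary comp
```

The leading eigenvalue dominates and the corresponding recovered component is, up to sign and scale, the mean-shifted source.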

Nonstationarity in variance
Let us then consider a SAVE type method for detecting stationarity deviations in variance. Consider again the standardised time series y_t and let now

M_v = Σ_{i=1}^{K} (|T_i|/T) (S_{0,i}(y) − I_p)^2, (4)

where A^2 = AA', measure the deviation of the covariance matrices computed on the intervals T_1, ..., T_K from the global covariance I_p.
Again, the eigenvalues of M_v that correspond to the components that are nonstationary in variance should be non-zero. Let thus the eigenvalue decomposition be

M_v = U_v' D_v U_v,

where D_v is a p × p diagonal matrix with the eigenvalues of M_v as diagonal elements, and U_v = (U_v,n', U_v,s')' contains the eigenvectors of M_v as rows, arranged so that U_v,n is the k_v × p matrix containing the eigenvectors belonging to the nonzero eigenvalues and U_v,s is the (p − k_v) × p matrix containing the remaining eigenvectors.
Again k_v ≤ k. The transformation matrices to the two subspaces are accordingly W_v,n = U_v,n S_{0,T}(x_t)^{-1/2} and W_v,s = U_v,s S_{0,T}(x_t)^{-1/2}. As illustrated in Section 2.6, although this approach is designed to detect components with nonstationary variances, it may also detect components which are nonstationary in mean, as a change in mean may also change the variances computed on the intervals. We refer to this method in the following as SSAsave.
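Analogously, SSAsave can be sketched (again our own illustration) with a component whose variance changes halfway through the series:

```python
import numpy as np

def ssa_save(x, K):
    """SSAsave sketch: eigendecompose M_v, the weighted average of the squared
    deviations (S_0,i(y) - I_p)^2 of the interval covariances from I_p."""
    T, p = x.shape
    xc = x - x.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    W0 = vecs @ np.diag(vals ** -0.5) @ vecs.T
    y = xc @ W0
    M = np.zeros((p, p))
    for part in np.array_split(np.arange(T), K):
        D = np.cov(y[part], rowvar=False) - np.eye(p)
        M += len(part) / T * D @ D                     # matrix square (S_i - I)^2
    ev, U = np.linalg.eigh(M)
    idx = np.argsort(ev)[::-1]
    return ev[idx], U[:, idx].T @ W0

rng = np.random.default_rng(3)
T = 3000
n = rng.standard_normal(T) * np.r_[np.full(T // 2, 0.3), np.full(T // 2, 2.0)]
s = rng.standard_normal((T, 2))
x = np.column_stack([n, s]) @ rng.standard_normal((3, 3)).T
ev, W = ssa_save(x, K=6)
rec = (x - x.mean(axis=0)) @ W.T
```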

Nonstationarity in autocorrelation
The SIR and SAVE type methods are able to detect components which are nonstationary with respect to the first and the second moments. To detect nonstationarities in autocorrelation we need a statistic that measures the nonconstancy of the autocorrelations across segments. Let such a statistic be defined (for the standardised time series) as

M_τ = Σ_{i=1}^{K} (|T_i|/T) (S_{τ,i}(y) − S_{τ,T}(y))^2, (5)

where τ ≥ 1. The matrix M_τ thus measures the deviation of the autocovariance matrices computed on the intervals T_1, ..., T_K from the global autocovariance matrix S_{τ,T}. Using again the eigenvalue-eigenvector decomposition and separating the eigenvectors of M_τ corresponding to non-zero and zero eigenvalues yields the k_τ × p matrix U_τ,n and the (p − k_τ) × p matrix U_τ,s, where k_τ is the number of non-zero eigenvalues with k_τ ≤ k. The unmixing matrix estimates are then W_τ,n = U_τ,n S_{0,T}(x_t)^{-1/2} and W_τ,s = U_τ,s S_{0,T}(x_t)^{-1/2}. For this method to work, the intervals are required to have different autocovariances. As this method is aimed at detecting changes in the correlation structure, we denote it by SSAcor.
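A sketch of SSAcor (our own illustration; symmetrising the lag-τ autocovariance is an implementation choice on our part) on a series whose AR(1) coefficient flips sign halfway:

```python
import numpy as np

def ssa_cor(x, K, tau=1):
    """SSAcor sketch: eigendecompose M_tau, the weighted average of the squared
    deviations of the interval lag-tau autocovariances from the global one."""
    T, p = x.shape
    xc = x - x.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    W0 = vecs @ np.diag(vals ** -0.5) @ vecs.T
    y = xc @ W0

    def autocov(z):
        zc = z - z.mean(axis=0)
        S = zc[:-tau].T @ zc[tau:] / (len(z) - tau)
        return (S + S.T) / 2                       # symmetrised lag-tau autocovariance

    S_glob = autocov(y)
    M = np.zeros((p, p))
    for part in np.array_split(np.arange(T), K):
        D = autocov(y[part]) - S_glob
        M += len(part) / T * D @ D
    ev, U = np.linalg.eigh(M)
    idx = np.argsort(ev)[::-1]
    return ev[idx], U[:, idx].T @ W0

rng = np.random.default_rng(4)
T = 4000
n = np.zeros(T)
for t in range(1, T):
    phi = 0.8 if t < T // 2 else -0.8              # autocorrelation changes sign
    n[t] = phi * n[t - 1] + rng.standard_normal()
s = rng.standard_normal((T, 2))
x = np.column_stack([n, s]) @ rng.standard_normal((3, 3)).T
ev, W = ssa_cor(x, K=6)
rec = (x - x.mean(axis=0)) @ W.T
```

Note that the source has constant mean and (roughly) constant variance, so SSAsir and SSAsave would see nothing here, while the interval lag-1 autocovariances differ strongly from the global one.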

Comparison of SSAsir, SSAsave and SSAcor
In this section we visualize how SSAsir, SSAsave and SSAcor work and in which situations they fail. For that purpose we plot four nonstationary time series of length 3000 which are all standardized to have mean zero and unit variance.
We consider K = 6 equal-sized intervals and compute for each interval the mean, the variance and the autocovariance, and visualize these quantities. The three quantities are illustrated in Figures 1 and 2 using dots, vertical bars and horizontal bars, respectively. The methods detect nonstationarities when there is variation between the interval means (SSAsir), when the interval variances differ from 1 (SSAsave), and when the interval autocovariances differ from the global autocovariance (SSAcor). By combining the values computed on the intervals we then obtain the global statistic of interest for each time series and for each method. These statistics are reported in the top-left corners of the figures, and for the methods to work the values need to be non-zero. Figure 1 shows two different types of nonstationarity in mean: the left panel has a mean level shift and the right panel a strong seasonal pattern. For the mean level shift the interval means are clearly non-constant, but the interval variances differ as well. Thus in this case both SSAsir and SSAsave work, though the statistic of SSAsir is much larger than that of SSAsave. Notice that SSAcor does not work at all in this case. The seasonal time series shows that if the intervals are chosen badly, as is the case here, none of the methods work despite the clear nonstationarity. In A, Figure 13 shows the same examples with K = 6 intervals of unequal sizes, demonstrating that SSAsir and SSAsave can perform well for seasonal time series and that the performance depends on the chosen intervals.
In the left panel of Figure 2 the time series has a constant mean but changes in variance. As expected, SSAsir cannot detect this type of nonstationarity, as only the varying interval variances are clearly visible. Again, SSAcor does not work here. The right panel of Figure 2, on the other hand, has a constant mean and a constant variance, but the correlation structure changes twice. This is also visible in the figure, where for the first time the horizontal correlation bars have clearly different lengths. However, the overall measure of nonstationarity used by SSAcor is still small, indicating that detecting nonstationarity in correlation is quite difficult. As shown in Figure 14 in the Appendix, intervals of unequal sizes do not yield a larger nonstationarity measure for SSAcor. To conclude, there always seem to be cases where one of the methods is better than the others, making approaches that combine the three methods natural.

Combination of methods
The methods described above can be combined in order to detect all three types of nonstationarity. Notice first that analytic SSA (ASSA), as proposed in [13], tries to recover the unmixing matrix W so that the variation of the first and the second moments across intervals is minimized. To be more exact, the method combines the search for components that are nonstationary in mean and in variance by using the eigenvalue-eigenvector decomposition of M_m + M_v and proceeding as in the previous sections. The main drawback here is that changes in autocorrelation are not necessarily detected. Furthermore, there are no tools to identify what kind of nonstationarities the components exhibit.
Therefore, we suggest another approach which combines the three nonstationarity measures (3), (4) and (5). Denote from now on M_m = M_1, M_v = M_2 and M_τ = M_3. Our suggestion is to turn the problem into a joint diagonalization problem, which means that we search for an orthogonal p × p matrix U_c which minimizes

Σ_{i=1}^{3} ||off(U_c M_i U_c')||^2, or, equivalently, maximizes Σ_{i=1}^{3} ||diag(U_c M_i U_c')||^2. (6)

Here ||A|| is the matrix (Frobenius) norm, diag(A) is a p × p diagonal matrix with the same diagonal elements as A, and off(A) = A − diag(A). Notice that naturally only a subset of the matrices, or some additional matrices M_τ with various lags τ, can be used in the objective function, but the principle of the estimation procedure remains the same. Several algorithms for the approximate joint diagonalization in (6) exist in the literature. The most popular one, based on Givens rotations, is proposed in [25].
Based on U_c one can now compute the pseudo-eigenvalues d_{ij} = diag(U_c M_i U_c')_j, i = 1, 2, 3, j = 1, ..., p, and collect all those rows of U_c for which Σ_i d_{ij} ≠ 0 into a k_c × p matrix U_c,n. The remaining rows are collected into a (p − k_c) × p matrix U_c,s. Here the individual value d_{ij} indicates whether the jth component is nonstationary with respect to M_i. Such a classification of the components is not possible when, for example, methods such as ASSA are applied. The final transformation matrices for the nonstationary and stationary components are then W_c,n = U_c,n S_{0,T}(x_t)^{-1/2} and W_c,s = U_c,s S_{0,T}(x_t)^{-1/2}. In the following this method is referred to as SSAcomb.
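A plain Jacobi-type sketch of the Givens-rotation approach (our own simplified implementation following the standard rjd scheme; [25] should be consulted for the actual algorithm):

```python
import numpy as np

def joint_diag(Ms, eps=1e-12, max_sweeps=100):
    """Approximately jointly diagonalise symmetric matrices M_1, ..., M_L:
    find an orthogonal U with U' M_i U near-diagonal via Givens/Jacobi sweeps."""
    Ms = [M.copy() for M in Ms]
    p = Ms[0].shape[0]
    U = np.eye(p)
    for _ in range(max_sweeps):
        changed = False
        for i in range(p - 1):
            for j in range(i + 1, p):
                # closed-form rotation angle for the (i, j) plane
                g = np.array([[M[i, i] - M[j, j], M[i, j] + M[j, i]] for M in Ms])
                G = g.T @ g
                ton, toff = G[0, 0] - G[1, 1], G[0, 1] + G[1, 0]
                theta = 0.5 * np.arctan2(toff, ton + np.hypot(ton, toff))
                c, s = np.cos(theta), np.sin(theta)
                if abs(s) > eps:
                    changed = True
                    R = np.eye(p)
                    R[i, i] = R[j, j] = c
                    R[i, j], R[j, i] = -s, s
                    U = U @ R
                    Ms = [R.T @ M @ R for M in Ms]
        if not changed:
            break
    return U, Ms

# sanity check on exactly jointly diagonalisable input
rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Ms_in = [Q @ np.diag(rng.uniform(0.5, 3.0, 4)) @ Q.T for _ in range(3)]
U, Ms_out = joint_diag(Ms_in)
```

For exactly commuting input matrices the rotated matrices become diagonal up to numerical precision; for the sample matrices M_1, M_2, M_3 the off-diagonal mass that cannot be removed is what the criterion (6) minimizes.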

Practical issues and identifiability of components
There are several practical considerations still worth pointing out. Clearly, the finite sample eigenvalues of the M matrices corresponding to the stationary components will never be exactly zero. Thus the value of k must be chosen based on a cut-off value or graphically. Furthermore, it is obvious that the choice of the number of intervals K, and of how the time series is divided, is crucial in this framework. The impact of the number of intervals will be illustrated in the simulation studies in Section 4. Ideally, given the number of intervals, the cut points should be chosen such that the quantities of interest are as different as possible across intervals. However, as finding optimal cut points is difficult in practice, the intervals should, in our opinion, at least be of different lengths, so that possible seasonality effects are easier to detect and issues similar to those seen in Section 2.6 do not occur.
For all methods discussed above, the unmixing matrix W has the two parts W_n = U_n S_{0,T}(x_t)^{-1/2} and W_s = U_s S_{0,T}(x_t)^{-1/2}, specifying the nonstationary and stationary subspaces, respectively, where the matrices U_n and U_s depend on the method used. Let now x*_t = B x_t denote an affine transformation of x_t with a full-rank p × p matrix B, and denote the unmixing matrix based on x*_t by W*. In the BSS literature (see for example [26]) a BSS unmixing matrix is defined to be affine equivariant if W x_t = J W* x*_t for some diagonal matrix J which has ±1 on its diagonal. Affine equivariance thus means that the recovered components do not depend on the mixing matrix, except for their signs.
In the proposed SSA approach, the eigenvalues of the M matrices (or pseudo-eigenvalues in the case of SSAcomb) provide a way of ordering the nonstationary components. Thus, if the eigenvalues corresponding to the nonstationary components are unique, then U_n is well defined and we have W_n x_t = J W*_n x*_t. However, for the stationary part all (pseudo-)eigenvalues are equal, and thus U_s is not well defined and we actually have W_s x_t = O W*_s x*_t for some orthogonal matrix O. However, as for finite data the eigenvalues are usually all distinct, the finite sample version will be affine equivariant. Note, however, that W_s x_t is not necessarily equivalent to s_t and W_n x_t is not necessarily equivalent to n_t, as both s_t and n_t have the ambiguities mentioned above and only the two subspaces are well defined. This is quite similar to non-Gaussian component analysis (NGCA) [27], a BSS method for iid data that divides the data into a Gaussian part and a non-Gaussian part. See also the supplementary material of [6] for a detailed discussion. However, it is often desirable to have models where the interesting components are well defined (up to some minor identifiability issues such as their signs). In this work we do not give a full analysis of the identifiability issues but only mention some possibilities. The stationary components are identifiable when they follow a second-order source separation (SOS) model or a stationary independent time series model; for details see for example [5]. In that case methods such as AMUSE [28,29], SOBI [30,1], gSOBI [31] or gJADE [32] can be applied to the identified stationary subspace. For the nonstationary subspace at least the following assumptions allow the estimation of the components. If all nonstationary components are independent and the marginal distributions are non-Gaussian, independent component analysis (ICA) methods can be used; for an overview of several ICA methods, see [3,4]. Another possibility is to assume a nonstationary source separation (NSS) model for the nonstationary components, that is, to assume that the mean is stationary but each component has nonstationary variances which change in different patterns. This is, for example, a reasonable assumption for audio data. Interestingly, most NSS methods also divide the time series into intervals and jointly diagonalize statistics based upon them. See for example [2,33] for more details on NSS methods. In this context SSAsave seems a natural SSA method.
So far we have assumed that the dimensions of the two subspaces are known, as is assumed in almost all of the SSA literature mentioned above. This is, however, a rather unrealistic assumption. As long as inferential tools for estimating k are missing, the screeplot of the (pseudo-)eigenvalues might give some indication for the choice. In the following we apply SSAcomb based on six equal-sized intervals to an eight-variate time series with T = 4000. The time series are simulated according to Setting 4 of the following section, the resulting pseudo-eigenvalues are given in Table 1, and the screeplot shown in Figure 3 is based on their column sums. Based on the screeplot one could think of 3 or 4 nonstationary components. Taking then into account the underlying pseudo-eigenvalues, it is clear that the first component is nonstationary according to M_v only, indicating nonstationarity in variance. The second component is nonstationary according to M_m and the third component according to M_τ. The fourth component has small values for all three M matrices and therefore might not actually be nonstationary. The true value of k is indeed three in this case.

Figure 3: Screeplot based on the column sums of the pseudo-eigenvalues from Table 1.

Simulations and an example
In this section we compare the methods discussed above using simulation studies. The method that combines the search for all three types of nonstationarity is also illustrated by an example.

Simulations
As pointed out above, it depends on the purpose of the analysis whether the stationary subspace or the nonstationary subspace is of interest. The goal of this simulation study is therefore to evaluate how well the different methods estimate the two subspaces under the assumption that the dimension of the subspace is known, as is assumed in most of the SSA references mentioned above. Thus, based on the sample eigenvalues of the methods, the unmixing matrix estimates can be decomposed into Ŵ_s and Ŵ_n with dimensions (p − k) × p and k × p, respectively.
For comparing the methods we need a performance measure that takes into account the fact that the subspaces are only specified up to rotations. We therefore compute the projection matrices P̂_s = Ŵ_s'(Ŵ_s Ŵ_s')^{-1} Ŵ_s and P̂_n = Ŵ_n'(Ŵ_n Ŵ_n')^{-1} Ŵ_n and compare them with the corresponding true projection matrices P_s and P_n by computing the squared distances

D(P̂_s, P_s) = ||P̂_s − P_s||^2 / 2 and D(P̂_n, P_n) = ||P̂_n − P_n||^2 / 2,

which can take values in [0, min{k, p − k}], with zero indicating a perfect recovery of the subspace. For more details about this performance criterion, see [34,35].
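A sketch of this performance criterion; the division by two is our normalisation, chosen so that the stated range [0, min{k, p − k}] holds:

```python
import numpy as np

def proj(W):
    """Orthogonal projection matrix onto the row space of the k x p matrix W."""
    return W.T @ np.linalg.solve(W @ W.T, W)

def subspace_dist2(W_hat, W_true):
    """Squared distance ||P_hat - P_true||_F^2 / 2 between the projections;
    0 means perfect recovery, min{k, p - k} is the maximum."""
    return 0.5 * np.linalg.norm(proj(W_hat) - proj(W_true), "fro") ** 2

W_true = np.array([[1.0, 0.0, 0.0]])          # x-axis in R^3
W_same = np.array([[2.0, 0.0, 0.0]])          # same subspace, different basis
W_orth = np.array([[0.0, 1.0, 0.0]])          # orthogonal direction
```

Because only the projection matrices enter the distance, rescaling or rotating the rows of an unmixing matrix leaves the criterion unchanged, which is exactly the invariance required here.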
All following simulations were done with R [36] using the packages JADE [37] and LDRtools [38]. In all simulation settings the observed time points are 1, ..., T, where T varies from 1000 to 32000, and p = 8 with five stationary components s_{1,t}, ..., s_{5,t} and three nonstationary components n_{1,t}, n_{2,t} and n_{3,t}. Most components are based on moving average (MA), autoregressive (AR) and autoregressive moving average (ARMA) processes. However, inspired by [39], we also consider time-varying variance (TV-VAR) processes x_t with parameters α and β of the form x_t = h_t ε_t, where h_t^2 = h̄_t^2 + α x_{t−1}^2, the innovations ε_t are iid N(0, 1), x_0 = 0 and h̄_t = 10 − 10 sin(βπt/T + π/6)(1 + t/T). Similarly, we also consider time-varying autoregressive processes of order 1 (TV-AR1), where x_t = a_t x_{t−1} + ε_t with x_0 = 0, ε_t iid N(0, σ^2) and a_t = 0.5 cos(2πt/T).
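The TV-AR1 process, whose definition is explicit, can be simulated directly from the recursion (a short sketch):

```python
import numpy as np

def tv_ar1(T, sigma=1.0, rng=None):
    """Time-varying AR(1): x_t = a_t x_{t-1} + eps_t with a_t = 0.5 cos(2 pi t / T),
    x_0 = 0 and eps_t iid N(0, sigma^2)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(T)
    for t in range(1, T):
        a_t = 0.5 * np.cos(2 * np.pi * t / T)
        x[t] = a_t * x[t - 1] + sigma * rng.standard_normal()
    return x

x = tv_ar1(2000, rng=np.random.default_rng(6))
```

Early in the series a_t is close to 0.5, so consecutive values are positively correlated; near the middle a_t is close to −0.5 and the lag-1 correlation flips sign, which is exactly the kind of nonstationarity SSAcor targets.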
In Setting 1 the three nonstationary components all have nonstationary means, while in Setting 2 the means are all stationary but the variances are nonstationary. In Setting 3 the nonstationary information is in the correlation structure. Setting 4 then uses one nonstationary component from each of the previous settings, meaning that here combining the different approaches seems especially important. Examples of nonstationary components for Settings 1-3 with T = 2000 are shown in Figures 4-6, showing that the violations of stationarity can take quite different forms. In the simulation study we simulated 2000 source sets with various sample sizes T from all four settings and always used a random orthogonal mixing matrix as A. Then, for all methods, we computed the distances between the true and the estimated projections when using K = 2, 6, 12 equal-sized intervals. Notice that, in the spirit of SIR, the number of slices for SSAsir should exceed k = 3 and therefore SSAsir is not expected to work when K = 2.
The average distances are shown in Figures 7-10. It is clearly visible that having only two intervals, that is K = 2, is always the poorest choice. Although this was an expected result for SSAsir, it was not so obvious for the other methods. The differences between K = 6 and K = 12 decrease with increasing sample size, but for smaller sample sizes the choice K = 6 is preferable. This confirms results familiar from NSS and SDR studies, which show that it is important that the intervals contain enough observations to estimate the quantities of interest well enough.
As seen in Figure 7, in Setting 1 all methods are able to estimate the subspaces when there are enough intervals, which is a somewhat surprising result for SSAcor. From the results for Setting 2 in Figure 8 it is clear that SSAsir and SSAcor do not work at all. ASSA is unable to detect the stationary subspace but detects the nonstationary subspace quite well. SSAcomb works well despite containing matrices which, when used by themselves, fail completely. The results for Setting 3 in Figure 9 indicate that SSAcor and SSAcomb perform clearly best, while ASSA does worse than SSAsave and is therefore affected by the poor performance of SSAsir. Finally, as seen in Figure 10, in Setting 4, where all different types of nonstationarity are present, it is clear that SSAcomb is the only method that can estimate the subspaces properly. It requires, however, a much larger sample size to detect all three nonstationary components. ASSA seems to be consistently missing one nonstationary component, which is not surprising as it does not look for nonstationarity in the autocorrelation structure.
Thus, to conclude, based on the simulation studies SSAcomb seems to be the preferable method, as it works quite well independently of the nature of the underlying stationarity violations. However, large sample sizes are preferable and the number of intervals K plays a role in the performance. In the simulation studies above, a moderate number of K = 6 seems a reasonable choice.

Example
We applied the proposed SSAcomb method to magnetoencephalographic (MEG) data of length T = 221710 that was recorded using p = 102 magnetometers at the Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä. Magnetometers record brain activity indirectly by measuring changes in the magnetic fields generated by electrical currents occurring in the brain. As the measurements are taken from the top of the head, it is realistic to assume that the observed signals are mixtures of actual brain activity signals. The signals are also mixed with artifacts that occur due to external physiological factors such as moving the head, tensing muscles and blinking and moving the eyes. Such mixing is also seen in Figure 15 in B, where we plot the first ten MEG signals. The goal of the analysis is then to recover the brain activity of interest from the observed mixture. Notice that KL-SSA was applied to EEG data analysis with a similar goal in mind in [6,7].
Our interest was to see whether SSAcomb can recover nonstationary components that possibly correspond to interesting brain activity or physiological signals. As the data are rather long, we applied SSAcomb with K = 12 and plotted the screeplot of the sums of the pseudo-eigenvalues in Figure 11.
Based on the screeplot we note that the method is able to detect ten nonstationary components. We then list the ten pseudo-eigenvalues in Table 2 and conclude that components 1-5 and 8-10 were detected due to nonstationarity in variance and in autocorrelation. Component 6 has all three types of nonstationarity and component 7 is nonstationary with respect to the variance. Due to the large number of observations, nonstationarities in autocorrelation are difficult to observe, but nonstationarities in mean and variance are clearly visible in Figure 12. In Figure 16 in B we show the topography plots related to each signal. The topography plots demonstrate which part of the brain is activated and help subject-matter scientists to separate the components related to artifacts from those related to the brain activity relevant to the study at hand. Notice that many MEG studies proceed by removing artifacts before trying to detect the brain activity signals of interest; see for example [40] and references therein.
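The idea behind screening components by interval-wise behavior can be illustrated with a rough score that mimics, but is not identical to, the SSAsir/SSAsave/SSAcor pseudo-eigenvalues: split each recovered component series into K intervals and accumulate the squared deviations of the interval means, variances, and lag-1 autocovariances from their full-sample counterparts. The function below is an illustrative sketch under those assumptions, not the paper's exact estimator:

```python
import numpy as np

def nonstationarity_scores(z, K=12, lag=1):
    """Rough nonstationarity scores for each row of z (shape k x T):
    sum over K intervals of the squared deviations of interval mean,
    variance and lag-`lag` autocovariance from their global values.
    Larger scores indicate stronger nonstationarity of some type."""
    k, T = z.shape
    intervals = np.array_split(np.arange(T), K)
    scores = np.zeros(k)
    for i in range(k):
        x = z[i]
        gm, gv = x.mean(), x.var()
        gac = np.mean((x[:-lag] - gm) * (x[lag:] - gm))  # global autocovariance
        for idx in intervals:
            s = x[idx]
            sm, sv = s.mean(), s.var()
            sac = np.mean((s[:-lag] - sm) * (s[lag:] - sm))
            scores[i] += (sm - gm) ** 2 + (sv - gv) ** 2 + (sac - gac) ** 2
    return scores
```

A stationary component yields a score close to zero (interval statistics fluctuate only by sampling error), while, for example, a mean shift inflates the interval-mean terms, so ordering components by such scores produces a screeplot-like ranking.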

Conclusion
In this paper we gave a statistical perspective on stationary subspace analysis. We showed how SSA can be formulated as a supervised dimension reduction method and, in that spirit, proposed methods that can detect nonstationarities in mean, variance, autocorrelation, or in all of them. The methods thus extend the classical analytic SSA to a more general time series setting. Simulation studies illustrated that the method combining all three nonstationarity measures outperformed its competitors in various simulation settings. The combined method seemed to perform best when the sample sizes were large and the number of intervals was moderate. Notice, however, that the performances of the different methods depend heavily on the underlying time series components, and it might therefore also be of interest to weight the matrices in the combination method if some special structures are of more importance than others.
In this study we assumed that the number of nonstationary components, k, is known. However, this parameter value is most often unknown and needs to be estimated. In [6,9] a sequential likelihood ratio test was proposed for determining the dimension of the stationary subspace in the iid case. In the context of ASSA, [13] mentioned that the eigenvalues can give guidance for choosing the dimension, but they did not pursue this idea any further. When using SSAsir, SSAsave, SSAcor or SSAcomb, a possible test of the null hypothesis H0: k = k0, where k is the true dimension of the nonstationary subspace and k0 is the proposed dimension, could also be based on the eigenvalues of the matrices M and on resampling-based procedures, in a similar fashion as in [41] for example. An estimate of the nonstationary subspace dimension could then be based on sequential testing as in [42]. We leave this for future work. In this context we will also investigate whether the detection of the type of nonstationarity can be formalized and how the SSA methods can help to detect change points in multivariate time series.
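The sequential-testing idea sketched above has a simple generic structure: test H0: k = k0 for k0 = 0, 1, 2, ... and take the first non-rejected value as the dimension estimate. The sketch below assumes a user-supplied function `pvalue_for(k0)` that returns a p-value for H0: k = k0 (e.g. from a resampling procedure on the eigenvalues of the matrices M); how such p-values are constructed is precisely the open question left for future work:

```python
def estimate_dimension(pvalue_for, p, alpha=0.05):
    """Forward sequential estimate of the nonstationary subspace dimension.

    Tests H0: k = k0 for k0 = 0, 1, ..., p - 1 and returns the first k0
    that is not rejected at level alpha; returns p if all are rejected.
    `pvalue_for` is a hypothetical user-supplied callable k0 -> p-value.
    """
    for k0 in range(p):
        if pvalue_for(k0) > alpha:
            return k0
    return p
```

Note that, as with any sequential procedure of this kind, the nominal level alpha of the individual tests does not directly translate into an error rate for the final dimension estimate.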

Figure 1:
Figure 1: Visualization of SSAsir, SSAsave and SSAcor for a component with nonstationarity in mean (left panel trend, right panel seasonality). The black dots represent the interval mean, the height of the vertical red bar the interval variance and the width of the blue horizontal bar the interval autocovariance.

Figure 2:
Figure 2: Visualization of SSAsir, SSAsave and SSAcor for a component with nonstationary variance (left panel) and for a component with a nonstationary correlation structure (right panel). The black dots represent the interval mean, the height of the vertical red bar the interval variance and the width of the blue horizontal bar the interval autocovariance.

Figure 4:
Figure 4: Example of the nonstationary components in Setting 1 for T = 2000.

Figure 5:
Figure 6:
Figure 5: Example of the nonstationary components in Setting 2 for T = 2000.

Figure 7:
Figure 7: Average squared distances between the true and estimated subspaces in Setting 1.

Figure 12:
Figure 12: The ten nonstationary components recovered by SSAcomb.

Figure 8:
Figure 8: Average squared distances between the true and estimated subspaces in Setting 2.

Figure 9:
Figure 9: Average squared distances between the true and estimated subspaces in Setting 3.

Figure 11:
Figure 11: Screeplot of the sums of the pseudo-eigenvalues based on MEG data.

Table 1:
Pseudo-eigenvalues of SSAcomb for a simulated time series.

Table 2:
Pseudo-eigenvalues of SSAcomb based on the MEG data.