Cramér-Von Mises Statistic for Repeated Measures

The Cramér-von Mises criterion is employed to test whether the marginal distribution functions of a k-dimensional random variable are equal. The well-known Donsker invariance principle and the Karhunen-Loève expansion are used to derive its asymptotic distribution. Two different resampling plans (one based on permutations and the other on the general bootstrap algorithm, gBA) are also considered to approximate its distribution. The practical behaviour of the proposed test is studied through a Monte Carlo simulation study. The statistical power of the test based on the Cramér-von Mises criterion is competitive when the underlying distributions differ in location, and it is clearly better than that of the Friedman test when the only difference among the involved distributions is in spread or shape. Both resampling plans lead to similar results, although the gBA avoids the usually required interchangeability assumption. Finally, the method is applied to the study of the evolution of income inequality among several European countries between the years 2000 and 2011.


Introduction
Comparing the marginal distribution functions of a k-dimensional random variable for equality is a common problem in statistical inference (for example, in biomedicine, in problems of comparing diagnostic procedures or bioequivalence; Freitag, Czado & Munk 2007). In practice, the most frequent cases are the study of one feature measured on the same subjects at different time points (analysis of repeated measures) and matched studies. Although a number of methods exist for comparing k distributions from independent samples, the k-sample problem for dependent data has not been as widely studied, and the traditional parametric (ANOVA) and nonparametric (Friedman test) repeated measures procedures are the techniques usually employed to solve these problems.
In this context, several rank tests have been proposed. In a non-exhaustive review: Ciba-Geigy & Olsson (1982) developed a specific test for comparing dispersion in a paired samples design; Lam & Longnecker (1983) introduced modifications which improve the power of the classical Wilcoxon rank sum test for this problem; Munzel (1999a) used the normalized version of distribution functions to derive an asymptotic theory for rank statistics including ties, and considered a mixed model which permits almost arbitrary dependences; Munzel (1999b) studied different nonparametric permutation methods for repeated measures problems in a two-sample framework; more recently, Freitag et al. (2007) proposed a test based on the Mallows distance with this goal. Other authors such as Govindarajulu (1995), Govindarajulu (1997) or Podgor & Gastwirth (1996) have also dealt with this topic from different approaches.
Although the use of the bootstrap in multivariate problems is straightforward when building confidence intervals and related estimates, the way to resample under the null (in particular, the way to involve this assumption in the resampling) while preserving the original data structure is not direct, and the use of the bootstrap in hypothesis testing involving paired designs is not so clear. It is not trivial how to involve the (null) hypothesis of equality of the k marginal distributions of a multivariate random variable. The most common procedure, the permutation test (see, for example, Good 2000, Munzel 1999b), requires that the different components of the k-dimensional random vector be interchangeable (see Venkatraman & Begg 1996 or, more recently, Nelsen 2007). Under the null, this is not a very strong assumption when comparing two samples (most of the previously cited works deal exclusively with this particular case), but for three or more samples it means that the relationship between each pair must be the same (also known as the sphericity hypothesis). Although, in practice, the permutation test has demonstrated its robustness with respect to this assumption for most of the usual statistics, the assumption itself is usually violated.
In this paper, the authors deal with the problem of comparing the k marginal distribution functions of a typical multivariate problem for equality. With this goal, the traditional Cramér-von Mises criterion is considered. The Donsker invariance principle and classical Gaussian process theory, in particular the Karhunen-Loève expansion, are used to obtain (a non-explicit version of) the asymptotic distribution of the Cramér-von Mises statistic when the samples come from the same subjects. The properties of this statistic allow us to develop a resampling procedure which does not need the (usual) interchangeability (or sphericity) assumption. This method is described and its consistency is proved. It is worth mentioning that the considered procedures (the asymptotic, permutation and bootstrap ones) are simple, useful and easy to implement. A simulation study is carried out (Section 3); its results suggest that the Cramér-von Mises criterion performs well in all considered situations and is clearly better than the Friedman test when distributions differ mainly in spread or shape. These results are in line with the usual ones when the Cramér-von Mises criterion is used in other contexts (see, for example, Martínez-Camblor & Uña-Álvarez (2009) or Martínez-Camblor (2011)). Finally, the proposed method is applied to the study of income inequality among thirty European countries in the years 2000 and 2011 (Section 4).
During the revision process of this paper, the work of Quessy & Éthier (2012) (QE), which deals with the same problem from a slightly different approach, was published. The main results of the present manuscript had been developed around 2008-2009 and, of course, independently of the previously cited work. In order to preserve this independence, and although several of the reported results overlap with those obtained by QE, we have kept them in the appendix.

Cramér-von Mises Statistic for Repeated Measures
The well-known Cramér-von Mises criterion, introduced separately by Harald Cramér and Richard Edler von Mises (Cramér 1928, Von Mises 1991), was originally designed to assess the goodness of fit of a probability distribution F* to a fixed distribution function F_0 and is given by
W² = n ∫ (F*(t) − F_0(t))² dF_0(t).
In the immediate one-sample applications, F_0 is the theoretical cumulative distribution function (CDF) and F* is the empirical cumulative distribution function (ECDF), F_n. Csőrgő & Faraway (1996) derived the exact distribution of this statistic and proposed a correction for its asymptotic distribution. Anderson (1962) derived the asymptotic distribution for the two-sample case. A standard k-dimensional generalization was proposed by Kiefer (1959), who considered the expression
W²_k = Σ_{i=1}^k n_i ∫ (F_{n_i}(X_i, t) − F_N(X, t))² dF_N(X, t),
where N = Σ_{i=1}^k n_i, and F_{n_i}(X_i, t) and F_N(X, t) are the ECDFs referred to the ith sample (1 ≤ i ≤ k) and to the pooled sample, respectively. Brown (1982) also dealt with the k-sample problem; he studied the asymptotic distribution and introduced a permutation test based on the same criterion. In Martínez-Camblor & Uña-Álvarez (2009), the statistical power of this statistic was considered in a simulation study (jointly with six other statistics based on the ECDF and four more based on the kernel density estimator). The W²_k test obtained very competitive results in the eight considered models (four symmetrical and four asymmetrical).
In this section we study different approximations to the distribution of W²_k when the data come from a multivariate variable, i.e., in our case, we have a k-dimensional random sample X = (X_1, . . ., X_k); let F_n(X, t) = (F_{n,1}(X_1, t), . . ., F_{n,k}(X_k, t)) and F(t) = (F_1(t), . . ., F_k(t)) denote the vectors of ECDFs and theoretical cumulative distribution functions (CDFs), respectively. In Theorem 1, (we must remark, a non-explicit version of) the asymptotic distribution is proved for the statistic
W²_k(n) = n Σ_{i=1}^k ∫ (F_{n,i}(X_i, t) − F_{n,•}(X, t))² dF_{n,•}(X, t),
where F_{n,i}(X_i, t) (1 ≤ i ≤ k) is the ECDF referred to the ith sample and F_{n,•}(X, t) = k^{-1} Σ_{i=1}^k F_{n,i}(X_i, t), when the (null) hypothesis
H_0: F_1 = · · · = F_k    (1)
is true.
Theorem 1. Let ξ be a k-dimensional random vector and let X be a random sample from ξ (with size n). Using the above notation, if the null hypothesis in (1) holds, then
W²_k(n) →_L Σ_{l∈N} Σ_{i=1}^k λ²_{i,l} M²_{i,l},
where {M_l = (M_{1,l}, . . ., M_{k,l})}_{l∈N} is a sequence of k-dimensional, normally distributed random variables whose marginals follow a N(0, 1) distribution, and {{λ_{i,l}}_{i=1}^k}_{l∈N} are non-negative constants satisfying Σ_{l∈N} λ²_{i,l} < ∞ for 1 ≤ i ≤ k.
The above theorem guarantees the consistency and gives the convergence rate of the studied statistic. However, strictly speaking, this result does not provide its distribution in full. In order to build asymptotic critical regions, the explicit values of the coefficients {{λ_{i,l}}_{i=1}^k}_{l∈N} must be known (eigenvalues and eigenfunctions must be computed). We want to note that this is a non-trivial problem which involves complex (and, for some readers, cumbersome) analysis (see, for example, Deheuvels 2005). In addition, these eigenvalues depend on the covariance structure of the data and should be computed separately for each problem. The following remark points out some comments on the eigenvalue calculation in the two-sample case.
Note 1. In the two-sample case, the asymptotic distribution of W²_k(n) under the null is equivalent to the distribution of W² = ∫ (W_1{t} − W_2{t})² dt, where W_i{t} (i ∈ {1, 2}) is a standard Brownian bridge. Eigenvalues and eigenfunctions are the non-zero solutions (λ_j, e_j) of a Fredholm-type integral equation, subject to the orthonormality restriction on the eigenfunctions e_j. Obviously, the particular solutions depend on the covariance function f. For instance, assuming f(u, v) + f(v, u) = u ∧ v − uv, the functions sin(jπu) and cos(jπu) (j ∈ N) are possible solutions, which lead to eigenvalues of the form λ_j = (jπ)^{-2} (see, for instance, Van der Vaart, 1998).
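The series representation behind these eigenvalues can be illustrated numerically: sampling a truncated weighted sum of independent χ²_1 variables with the Brownian-bridge weights λ_j = (jπ)^{-2} mentioned above gives a Monte Carlo approximation of the corresponding limit law. The truncation level J below is our choice for illustration, not part of the original derivation:

```python
import numpy as np

# Monte Carlo sketch of the series representation sum_j lambda_j * Z_j^2
# with lambda_j = (j*pi)^{-2}, truncated at J terms.
rng = np.random.default_rng(0)
J, T = 100, 50_000
lam = 1.0 / (np.arange(1, J + 1) * np.pi) ** 2     # eigenvalues (j*pi)^{-2}
W2 = rng.chisquare(1, size=(T, J)) @ lam           # T draws of the truncated sum
# E[sum_j lambda_j Z_j^2] = sum_j lambda_j = 1/6 up to the truncation
# error (about 1e-3 for J = 100); quantiles of W2 approximate the limit law.
```

Critical values can then be read off as empirical quantiles of W2, which is essentially the Monte Carlo route the paper follows once the eigenvalues (or their estimates) are available.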
Usually, in order to approximate the asymptotic distribution, the largest eigenvalue is taken and the remaining ones are ignored; using the properties of the coefficients (see the proof of Theorem 1 in the Appendix, in particular equation (8)), an approximation based on the first eigenvalues is obtained. Unfortunately, the first eigenvalue is also unknown. However, for each i ∈ {1, . . ., k}, the first (and, therefore, largest) eigenvalue can be approximated by a plug-in estimator λ̂_{i,1}. Note that λ̂_{i,1} ≥ λ_{i,1} (i ∈ {1, . . ., k}), with equality only when λ_{i,l} = 0 for all l > 1. Finally, in order to preserve the relationship among the different involved samples, we build the approximation C_A; we can work, without loss of generality, with standardized versions, and, after some additional computations, the asymptotic distribution can be approximated by C_A. We compute λ̂_{i,1} (1 ≤ i ≤ k) by estimating some parameters of the statistic; unfortunately, this method does not allow us to estimate any other eigenvalue. In the independent case, the quality of this approximation has been checked via simulations (see, for instance, Martínez-Camblor, Carleos & Corral (2012), and references therein).
Note that both the expected value and the variance of C_A are equal to those of W²_k(n). The (theoretical) unknown parameters involved in equation (4) can be estimated by plugging the respective ECDFs in place of the theoretical ones (the typical plug-in method) in their explicit expressions (equations (2) and (3)). At this point, it is worth remembering that, under the null hypothesis, all the marginal distribution functions are equal. Once these values are computed, the asymptotic distribution under the null can be approximated by using some bound for quadratic forms (see, for example, Alkarni & Siddiqui 2001) or by using the Monte Carlo method: generate T independent samples (with the original sample size) from the k-dimensional normal distribution (after computing its correlation matrix from the corresponding equations) and compute the respective T asymptotic values of the statistic by using (4). In Section 3, the latter possibility is employed in the simulation study.
On the other hand, the properties of the Cramér-von Mises statistic allow us to propose a useful resampling plan to approximate its distribution for paired samples in small-sample problems. The following subsection is devoted to developing a bootstrap approximation in the current context.

Bootstrap Approximation
The bootstrap, introduced and explored in detail by Bradley Efron (Efron 1979, Efron 1982), is a (mainly, though not exclusively, nonparametric) computer-intensive method of statistical inference which is often used to solve many real questions without needing to know the underlying mathematical formulas. Besides, under regularity conditions, the bootstrap estimate of a distribution is asymptotically minimax among all possible estimates (Beran 1982).
Although the bootstrap method has received a great deal of attention and popularity, its use in statistical hypothesis testing has received considerably less attention (Martin 2007). Following Hall & Wilson (1991), many authors such as Westfall & Young (1993) have promoted null resampling as critical to the proper construction of bootstrap tests. However, in related-sample distribution comparison, it is not straightforward how to resample under the null, and permutation tests (Good 2000) are the ones usually employed for this goal. In order to guarantee the consistency of the latter method, exchangeability among the different components must be assumed (Venkatraman & Begg 1996) and, although for most statistics this technique has proved its robustness with respect to this assumption in practice, in k-dimensional problems (k > 2) the assumption is usually violated. Recently, Martínez-Camblor et al. (2012) proposed a general resampling plan which focuses on hypothesis testing without the need of assuming additional conditions. In particular, for the present problem, under the null, it is easy to prove the following:
Theorem 2. Under the assumptions and notation of Theorem 1, let X* = (X*_1, . . ., X*_k) be an independent random sample generated from F_n(X, •) (the multivariate ECDF referred to the random sample X), and let W^{2,*}_k(n) denote the statistic computed on X* with the original ECDFs playing the role of the theoretical ones. Then, under the null, for each u ∈ R,
P_X(W^{2,*}_k(n) ≤ u) − P(W²_k(n) ≤ u) →_n 0 (a.s.),
where P_X denotes probability conditionally on the sample X.
The above result proves the pointwise convergence (for each fixed u ∈ R) of the bootstrap method. Uniform convergence can also be derived (under mild and usual conditions) from the general theory of U- and V-statistics (see, for example, Arcones & Giné 1992). Theorem 2 guarantees that the distribution of W²_k(n) can be approximated by that of W^{2,*}_k(n) and, as usual, this distribution can be approximated by the Monte Carlo method following the algorithm:
B1. Compute the value of W²_k(n) on the original sample X.
B2. From the multivariate empirical cumulative distribution function, F_n(X, t), draw B independent k-dimensional random samples with size n.
B3. For each bootstrap sample, compute the corresponding value of W^{2,*}_k(n).
B4. The final P-value is given by the proportion of bootstrap values larger than or equal to the original one.
The main difference between this algorithm and the classical bootstrap is that, in the proposed method, the null hypothesis (and only the null hypothesis) is used in order to compute the (bootstrap) statistic values instead of being used to draw the bootstrap samples. We do not resample from the null, and this fact allows us to preserve the original data structure.
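The algorithm B1-B4 can be sketched in code. This is our reading of the gBA, in which whole subjects (rows) are resampled, so the dependence structure is preserved, and the null is imposed only when the bootstrap statistic is computed, here by centring each bootstrap marginal ECDF at its original counterpart; the function names and the exact form of W^{2,*}_k(n) are assumptions of this sketch, not verbatim from the paper:

```python
import numpy as np

def ecdf_at(sample, t):
    """ECDF of `sample` evaluated at the points t."""
    return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

def gba_pvalue(X, B=499, seed=None):
    """Sketch of the general bootstrap algorithm (gBA) for W_k^2(n)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    grid = np.sort(X.ravel())                       # evaluation grid

    def w2(G):
        # n * sum_i integral (G_i - Gbar)^2, mass 1/(n k) on each grid point
        return n * np.sum((G - G.mean(axis=0)) ** 2) / (n * k)

    F = np.stack([ecdf_at(X[:, i], grid) for i in range(k)])
    obs = w2(F)                                     # B1: observed statistic
    boot = np.empty(B)
    for b in range(B):
        Xb = X[rng.integers(0, n, size=n)]          # B2: resample subjects (rows)
        Fb = np.stack([ecdf_at(Xb[:, i], grid) for i in range(k)])
        boot[b] = w2(Fb - F)                        # B3: null imposed here only
    return (1 + np.sum(boot >= obs)) / (B + 1)      # B4: Monte Carlo P-value
```

Note that the rows are resampled as a whole, so no exchangeability among components is assumed; the null enters only through the centred bootstrap ECDFs in step B3.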
The permutation method is based on the idea that, within the same subject, each value can be located in any position. For this to hold, not only must the null be true but interchangeability must also hold. Although, in practice, the permutation method has proved its robustness for a wide variety of statistics, let us go to an extreme. We consider a three-sample problem (sample size n) where the first and second variables are the same and the third one is independent of the other two. In this setting, the decomposition
W²_k(n) = S_{n,1} + S_{n,2} + S_{n,3}
is derived, where S_{n,i} = n ∫ (F_{n,i}(X_i, t) − F_{n,•}(X, t))² dF_{n,•}(X, t). It is obvious that the difference between F_{n,1} and F_{n,3} does not have the same weight in the three summands. However, the permutation algorithm assumes that the summands have the same distribution, in particular the same expected value. Table 1 depicts the means of S_{n,i} (labelled S̄_{n,i}) for i ∈ {1, 2, 3} in 2,000 Monte Carlo (MC) simulations, and when the permutation (P) and the proposed bootstrap (B) methods are used. The observed rejection proportion (α = 0.05) and the value of W²_k(n) are also included. The underlying distributions are uniform on [0, 1] and n = 50.
Although W²_k(n) is, in general, well estimated by the permutation method (its mean is similar to the expected one), the total value is evenly distributed among the three summands. Even though the results for W²_k(n) are, in general, good (the observed rejection proportion is too large, but this is an extreme and hardly realistic problem), the permutation method does not reflect the data structure, and this fact can lead to mistakes when different weightings are considered for the involved summands or, for instance, in the presence of different missing data frameworks. In summary, the correct performance of the permutation method cannot be guaranteed in the absence of the exchangeability hypothesis.
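For comparison, the within-subject permutation test discussed above can be sketched as follows: under the null and exchangeability, the k values of each subject may be shuffled across the components. The function names are ours:

```python
import numpy as np

def cvm_stat(X):
    """W_k^2(n): n * sum_i integral (F_{n,i} - Fbar)^2 dFbar."""
    n, k = X.shape
    grid = np.sort(X.ravel())
    F = np.stack([np.searchsorted(np.sort(X[:, i]), grid, side="right") / n
                  for i in range(k)])
    return n * np.sum((F - F.mean(axis=0)) ** 2) / (n * k)

def permutation_pvalue(X, B=499, seed=None):
    """Within-subject permutation test for the k-related-sample problem."""
    rng = np.random.default_rng(seed)
    obs = cvm_stat(X)
    count = 0
    for _ in range(B):
        # independently permute the k coordinates of every row (subject)
        idx = rng.random(X.shape).argsort(axis=1)
        if cvm_stat(np.take_along_axis(X, idx, axis=1)) >= obs:
            count += 1
    return (1 + count) / (B + 1)
```

As the extreme example shows, this scheme treats the k components as exchangeable within each subject, which is exactly the assumption the gBA avoids.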

Simulation Study
In order to investigate the practical behaviour of the proposed methodology, a Monte Carlo simulation study has been carried out. We estimate the statistical power (α = 0.05) from 2,000 Monte Carlo replications for different problems. For the Cramér-von Mises test, the asymptotic approximation, C_A (the P-value is approximated from 499 Monte Carlo replications following the approximation given in (4)), the bootstrap approximation, C_B (B = 499 in algorithm B1-B4), and the permutation method, C_P (the P-value is also approximated from T = 499 replications), are considered. Although the number of random combinations is small for obtaining a good estimate of a particular P-value, it is enough to obtain a good estimate of the statistical power. Note that, here, we are not interested in the result for each particular problem but in the final rejection proportion. The classical non-parametric Friedman (F_R) test is also included as the reference one. Let Z = (Z_1, Z_2, Z_3) be a three-dimensional random vector from a N_3(0, Σ) distribution, where 0 = (0, 0, 0) and the components of the covariance matrix are σ_{i,j} = 1 if i = j, σ_{1,2} = σ_{1,3} = 1/4 and σ_{2,3} = b (cases b = 1/4 and b = 3/4 are considered), and let N_i, 1 ≤ i ≤ 4, be independent random variables with standard normal distribution. A three-dimensional random sample with size n, X = (X_1, X_2, X_3), is drawn from the symmetrical (type I) models; asymmetrical (type II) models are also considered.
Figure 1: Density functions for the different considered models.
Table 2 shows the observed rejection proportions for the type I (symmetrical) models for two different sample sizes (n = 25, 50). Figure 2 depicts the observed statistical power for the type I models against sample size (sample sizes of 10, 25, 40, 50, 65 and 75 were considered). Although the observed rejection percentages are a bit larger than the expected ones (especially for the C_P approximation with b = 3/4), the nominal level is, in general, well respected. On the other hand, the Cramér-von Mises test obtains better results than the Friedman one even when the difference among the distributions is only in the location parameter while the variance-covariance structure is the same. The permutation and bootstrap approximations obtain quite similar results, although the permutation one is a bit better for σ_{2,3} = 3/4. The asymptotic approximation obtains worse results for small sample sizes but quite similar ones to the others for moderate sample sizes (n > 40).
Table 3 and Figure 3 are analogous to Table 2 and Figure 2, respectively, for the type II (asymmetrical) models. The nominal level is well respected in all considered cases. For the type II models (Figure 3), the Cramér-von Mises criterion is still the best when the main difference is not in the location parameter (model 1-II). When the location parameter is the main difference among the curves, the Friedman test is the best in model 3-II and the Cramér-von Mises test is the best in model 2-II, although both tests obtain quite similar results. In this scheme, the approximation to the asymptotic distribution of the Cramér-von Mises test is slow and, in general, its results are not competitive for n ≤ 50.

Inequality Incomes Analysis
In order to illustrate the practical performance of the proposed method, we consider the study of income inequality among thirty European countries. Measuring inequality is a complex problem which has been addressed from different approaches (see Cowel (2009) and references therein). Although the Gini index is probably the most popular measure of inequality, other approaches have also been considered (see, for instance, Martínez-Camblor 2007). Our objective is to study the (possible) changes in income distribution inequality in Europe. With this goal, the GDP per capita in PPS is used (quoted from the site http://epp.eurostat.ec.europa.eu/tgm/table.do?tab=table&plugin=1&language=en&pcode=tec00114): "Gross Domestic Product (GDP) is a measure for the economic activity. It is defined as the value of all goods and services produced less the value of any goods or services used in their creation. The volume index of GDP per capita in Purchasing Power Standards (PPS) is expressed in relation to the European Union (EU-27) average set to equal 100. If the index of a country is higher than 100, this country's level of GDP per head is higher than the EU average and vice versa. Basic figures are expressed in PPS, i.e. a common currency that eliminates the differences in price levels between countries allowing meaningful volume comparisons of GDP between countries. Please note that the index, calculated from PPS figures and expressed with respect to EU27 = 100, is intended for cross-country comparisons rather than for temporal comparisons."
The values for thirty European countries in the years 2000 and 2011 have been collected (downloaded from the above website). Since our objective is not to study the income distribution itself but the inequality of these incomes, we have considered the relative GDP per capita in PPS distribution, i.e., the considered variable is 100 times the original values divided by the European Union one (considering the current twenty-seven member countries), with the respective mean subtracted. Figure 4 depicts the empirical cumulative distribution function (ECDF) and the density estimation function for the considered GDP transformations.
The value of the Cramér-von Mises statistic between these two distributions was 0.171. The approximate P-values were 0.012, 0.005 and 0.001 from the asymptotic, bootstrap and permutation algorithms (based on 10,000 replications), respectively. All of them reject the null, and it can be concluded that the income inequality was not the same in 2000 and 2011. The Gini indices were 0.251 and 0.220, respectively.
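For reference, Gini indices such as those reported above can be obtained with the standard mean-absolute-difference formula G = E|X − X'| / (2 E[X]); a minimal sample version (the function name gini is ours):

```python
import numpy as np

def gini(x):
    """Gini index from the pairwise mean absolute difference:
    G = sum_{i,j} |x_i - x_j| / (2 n^2 mean(x))."""
    x = np.asarray(x, dtype=float)
    mad = np.abs(x[:, None] - x[None, :]).mean()   # mean |x_i - x_j| over all pairs
    return mad / (2.0 * x.mean())
```

The index is 0 for a perfectly equal (constant) distribution and approaches 1 as the total is concentrated in a single unit.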

Main Conclusions
The Cramér-von Mises criterion is widely used to compare cumulative distribution functions. Although different situations have been considered, the independent k-sample comparison is the most studied problem. We propose the use of this criterion in a typical k-related-sample design. By using the Donsker invariance principle and the Karhunen-Loève decomposition for Gaussian stochastic processes, its asymptotic distribution is developed. Although its explicit asymptotic distribution is still unknown, the obtained results allow us to develop a useful approximation. As usual, we also explore two different resampling approximations: the classical and well-known permutation test and the more recent general bootstrap algorithm (gBA).
For independent samples, the Cramér-von Mises statistic is distribution-free: its distribution does not depend on the distribution function from which the samples were drawn, and it can be tabulated in order to obtain the P-value for a particular problem. In a paired-sample design, the statistic distribution depends on the relationships among the involved variables; this relationship must always be estimated from the sample (therefore, universal eigenvalues do not exist for this problem), increasing the time necessary to compute the given asymptotic approximation. This is the main handicap of the asymptotic approximation which, in general, obtains good results for moderate sample sizes.
A general bootstrap algorithm (gBA) and the usual permutation method are also studied. The considered bootstrap procedure exploits a particular pivotal function and introduces the null hypothesis at the moment of computing the value of the (bootstrap) statistic instead of in the process of generating the random bootstrap samples. The main advantage is that the data structure is preserved and no additional assumptions (only the null) are required. Some details of its consistency are also reported; the reader is referred to Martínez-Camblor et al. (2012) for more details. This technique has already been used with success in a paired-sample extension of the AC-statistic (Martínez-Camblor 2010) and in inference on a particular ecological diversity index (Martínez-Camblor, Corral & Vicente 2011). An extreme example shows how the permutation method can lead to mistakes when the interchangeability assumption is violated, which is the usual situation when k > 2. However, the observed statistical power in our simulation study is similar for the three considered methods. We must remark that the asymptotic and bootstrap methods avoid the exchangeability assumption (Von Mises 1991, Nelsen 2007) and do not increase the methodological complexity.
As in the independent case (see, for example, Martínez-Camblor & Uña-Álvarez 2009), the simulation study results suggest that the Cramér-von Mises criterion is clearly better than the (classical) Friedman test when the main difference among the curves is not in the location parameter, and it obtains very good results otherwise. On the other hand, the proposed asymptotic approximation obtains statistical power similar to that of the resampling-based ones for moderate sample sizes. Relevant differences between the two considered variance-covariance matrix structures (b = 1/4 and b = 3/4) have not been observed.
We think that the considered practical case is especially well suited to illustrate the use of the proposed methodology. When the focus is not the location parameter but the shape, which is the case of inequality, the Cramér-von Mises statistic conventionally obtains good power for checking the equality of the involved distribution functions. In this context, traditional tests like the Friedman or the Student t-test do not work, but the Cramér-von Mises criterion has proved to be a powerful test and a valuable tool for this kind of goal.
Part of the results provided in this manuscript overlap with the ones obtained in Quessy & Éthier (2012). However, the present work was developed independently of, and prior to, the publication of the Quessy and Éthier one. The main differences between the works are: (a) Our approach is more practical and, from our point of view, easier to understand for non-probabilistic readers.
(b) The permutation method is considered and discussed.A pathological case where this method fails has been provided.
(c) A practical use of the gBA is shown, together with a simulation study where the quality of the provided approximations can be checked.
(d) The considered practical problem illustrates a situation where the equality among the location parameters is not the hypothesis to be tested.
Furthermore, for u, v ∈ R and 1 ≤ i, j ≤ k, under the null given in (1), we define the functions F_{i,j}(u, v) collecting the joint distribution structure of the components; with them we can obtain the following result.
Theorem 3. Using the Lemma 1 notation, if the null holds, then W²_k(n) converges in law to Σ_{l∈N} Σ_{i=1}^k λ²_{i,l} M²_{i,l}, where {M_l = (M_{1,l}, . . ., M_{k,l})}_{l∈N} is a sequence of k-dimensional, normally distributed random variables whose marginals follow a N(0, 1) distribution.
Proof. Taking into account the well-known convergence sup_{t∈R} {F_n(X, t) − F(t)} →_n 0 (a.s.), it is easy to see that the limit process is a centred k-dimensional Gaussian process. Moreover, if for t ∈ R we write t = (t, . . ., t) and let Σ(t) stand for the matrix defined in (2), then, by symmetry and under the null, the marginal variances are obtained for i ∈ {1, . . ., k}; in addition, the covariances follow, and it is easy to check that each marginal process is continuous in quadratic mean. This property allows applying the Karhunen-Loève decomposition (see, for example, Adler 1990) in order to obtain, for each i ∈ {1, . . ., k}, the representation Y_i(t) = Σ_{l∈N} λ_{i,l} e_{i,l}(t) M_{i,l}, where {e_{i,l}(t)}_{l∈N} is a convergent orthonormal sequence (the eigenfunctions), i.e., ∫ e_{i,p}(t) e_{i,q}(t) dF(t) = δ_{p,q}, {M_l}_{l∈N} are k-dimensional random vectors whose marginal distributions follow a N(0, 1), and {{λ_{i,l}}_{i=1}^k}_{l∈N} are non-negative constants (the eigenvalues) satisfying Σ_{l∈N} λ²_{i,l} < ∞. From (7), it is straightforward that, for i ∈ {1, . . ., k}, ∫ Y_i(t)² dF(t) = Σ_{l∈N} λ²_{i,l} M²_{i,l}, the interchange of sum and integral being justified by the Fubini theorem. Now we study the joint distribution of M_l (for each fixed l ∈ N). We will prove that Σ_{i=1}^k a_i M_{i,l} follows a normal distribution for each a_1, . . ., a_k ∈ R and each l ∈ N. Note that, for each (fixed) l ∈ N, (Y_1, . . ., Y_k) is a k-dimensional centred Gaussian process and, from (7) and the orthonormality of the eigenfunctions, each M_{i,l} can be recovered as M_{i,l} = λ_{i,l}^{-1} ∫ Y_i(t) e_{i,l}(t) dF(t). We can assume that λ_{i,l} ≠ 0 for 1 ≤ i ≤ k (if some λ_{i,l} = 0 (1 ≤ i ≤ k), then M_{i,l} does not appear in any definition and we have freedom to select it independently of the other ones); hence, each M_{i,l} is a linear functional of a Gaussian process and any linear combination Σ_{i=1}^k a_i M_{i,l} is normally distributed. From (6), (7) and (8) we know that there exists a sequence of k-dimensional normally distributed random variables {M_j}_{j∈N}, whose marginals follow a N(0, 1) distribution, and non-negative constants {{λ_{i,j}}_{i=1}^k}_{j∈N} satisfying (9) and (10), such that the above representation holds, and the proof is concluded.
Theorem 2 states that, under the null, for each u ∈ R, P_X(W^{2,*}_k(n) ≤ u) − P(W²_k(n) ≤ u) →_n 0 (a.s.), where P_X denotes probability conditionally on the sample X.
Proof. It is easy to check the corresponding decomposition of the bootstrap statistic and, directly from Lemma 1, its covariance structure; of course, the resulting equation is equivalent to the one in (6). Also from Lemma 1 and the classical theory of stochastic processes (in particular, Horváth and Steinebach (1999) proved that the expressions sup_{t∈R} |F_n(X, t) − F(t)| and sup_{t∈R} |F*_n(X*, t) − F_n(X, t)|, where X* is an independent random sample of size n generated from F_n(X, •) (the ECDF referred to the random sample X of size n), have the same asymptotic distribution), for each u ∈ R we have the convergence of the bootstrap processes, where Y_{F_n,i} (1 ≤ i ≤ k) and Ȳ_{F_n,•} are the processes which appear in Lemma 1. Since, under the null hypothesis, for all t ∈ R we have the convergence F_n(X, t) →_n F(t) (a.s.), for each u ∈ R it is directly derived that
P_X(W^{2,*}_k(n) ≤ u) − P(W²_k(n) ≤ u) →_n 0 (a.s.).
a = 3/4, and M = (1 − b)·X + b·Y denotes a mixture which takes values in X with probability (1 − b) and in Y otherwise. A graphical representation of the respective density functions is shown in Figure 1.

Figure 2 :
Figure 2: Observed rejection probabilities (α = 0.05) for the three different considered approximations of the Cramér-von Mises statistic distribution and the Friedman test against sample size for the symmetrical (type I) models.

Figure 3 :
Figure 3: Observed rejection probabilities (α = 0.05) for the three different considered approximations of the Cramér-von Mises statistic distribution and the Friedman test against sample size for the asymmetrical (type II) models.

Figure 4 :
Figure 4: Upper: distribution (left) and density (right) estimation functions for the relative GDP per capita in PPS in thirty European countries in the years 2000 (black) and 2011 (gray). Lower: bivariate density estimation for the GDP per capita in PPS in the years 2000 and 2011.

Table 1 :
Means for the three summands and for W²_k(n) in the problem described above, for the Monte Carlo approximation (MC) (2,000 iterations) and for the bootstrap and permutation methods. Observed rejection percentages (α = 0.05) are also included.

Table 2 :
Observed rejection probabilities for the type I (symmetrical) models. The nominal level is α = 0.05.