On the Performance Evaluation of Different Measures of Association

In this article our objective is to evaluate the performance of different measures of association for hypothesis testing purposes. We have considered several measures of association (including some commonly used ones) in this study, one of which is parametric and the others non-parametric, including three proposed modifications. The performance of these tests is compared under different symmetric, skewed and contaminated probability distributions, including the Normal, Cauchy, Uniform, Laplace, Lognormal, Exponential, Weibull, Gamma, t, Chi-square, Half Normal, Mixed Weibull and Mixed Normal distributions. Performance is measured in terms of power. We suggest appropriate tests which may perform better under different situations based on their efficiency grading(s). It is expected that researchers will find these results useful in decision making.


Introduction
It is indispensable to apply statistical tests in almost all observational and experimental studies in the fields of agriculture, business, biology, engineering etc. These tests help researchers reach valid conclusions from their studies. There are a number of statistical testing methods in the literature meant for different objectives; for example, some are designed for association, dispersion, proportion and location parameter(s). Each method has a specific objective with a particular frame of application. When more than one method qualifies for a given situation, choosing the most suitable one is of great importance and needs extreme caution. This mostly depends on the properties of the competing methods for that particular situation. From a statistical viewpoint, power is considered an appropriate criterion for selecting the finest method out of many possible ones. In this paper our concern is with the methods developed for measuring and testing the association between variables of interest defined on some population(s). For the sake of simplicity we restrict ourselves to the setting of two correlated variables, i.e. the case of bivariate population(s).
The general procedural framework can be laid down as follows: suppose we have two correlated random variables of interest X and Y defined on a bivariate population with their association parameter denoted by ρ. To test the hypothesis H_0 : ρ = 0 (i.e. no association) vs. H_1 : ρ ≠ 0, we have a number of statistical methods available depending upon the assumption(s) regarding the parent distribution(s). In the parametric environment the usual Pearson correlation coefficient is the most frequent choice (cf. Daniel 1990), while in the non-parametric environment we have many options. The most common of these are: the Spearman rank correlation coefficient introduced by Spearman (1904); Kendall's tau coefficient proposed by Kendall (1938); a modified form of the Spearman rank correlation coefficient, known as the modified rank correlation coefficient, proposed by Zimmerman (1994); and three measures of association based on Gini's coefficients given by Yitzhaki (2003) (two of which are asymmetric and one symmetric). We shall refer to all the aforementioned measures with the notations given in Table 1 throughout this article.
This study is planned to investigate the performance of different measures of association under different distributional environments. The association measures covered include some existing ones and some proposed modifications, and performance is measured in terms of power under different probability models. The organization of the rest of the article is as follows: Section 2 describes different existing measures of association; Section 3 proposes some modified measures of association; Section 4 deals with performance evaluations of these measures; Section 5 offers a comparative analysis of these measures; Section 6 includes an illustrative example; Section 7 provides the summary and conclusions of the study.

Table 1: Notations for the measures of association under study.

  r_P    Pearson product moment correlation coefficient (cf. Daniel 1990)
  r_S    Spearman rank correlation coefficient (cf. Spearman 1904)
  r_M    Modified rank correlation coefficient (cf. Zimmerman 1994)
  r_g1   Gini correlation coefficient between X and Y (asymmetric) (cf. Yitzhaki 2003)
  r_g2   Gini correlation coefficient between Y and X (asymmetric) (cf. Yitzhaki 2003)
  r_g3   Gini correlation coefficient between X and Y or between Y and X (symmetric) (cf. Yitzhaki 2003)
  τ      Kendall's tau (cf. Kendall 1938)

Measures of Association
In order to define and describe the above mentioned measures, suppose we have two dependent random samples in the form of pairs (x_1, y_1), (x_2, y_2), . . . , (x_n, y_n) drawn from a bivariate population (with association parameter ρ) under all the assumptions needed for a valid application of all the association measures under consideration. The descriptions of these measures, along with their main features and their respective test statistics, are provided below.

Pearson Product Moment Correlation Coefficient (r_P): It is a measure of the relative strength of the linear relationship between two numerical variables of interest X and Y. The mathematical definition of this measure is:

  r_P = cov(X, Y) / (SD(X) SD(Y))    (1)

where cov(X, Y) refers to the covariance between X and Y; SD(X) and SD(Y) are the standard deviations of X and Y respectively.
The value of r_P ranges from −1 to +1, these extremes implying perfect negative and positive correlation respectively. A value of zero for r_P means that there is no linear correlation between X and Y. It requires data on at least an interval scale of measurement. It is a symmetric measure that is invariant under changes in location and scale. Geometrically it is defined as the cosine of the angle between the two regression lines (Y on X and X on Y). It is not robust to the presence of outliers in the data. To test the statistical significance of r_P we may use the usual t-test (under normality); even under non-normality the t-test may be a safe approximation.
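To make the computation concrete, the following is a minimal Python sketch of r_P and the associated t statistic (our illustration only; the paper's computations were done in MINITAB, and the toy data x, y are hypothetical):

```python
import math

def pearson_r(x, y):
    """Pearson product moment correlation: cov(X, Y) / (SD(X) * SD(Y))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sdx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sdy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sdx * sdy)  # the common (n - 1) factors cancel

def t_statistic(r, n):
    """Usual t test statistic for H0: rho = 0, with n - 2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]
r = pearson_r(x, y)          # roughly 0.77 for this toy sample
t = t_statistic(r, len(x))   # compared against a t critical value with 3 df
```

The same `t_statistic` conversion is the one applied later in the paper to all coefficients except Kendall's tau.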
Spearman Rank Correlation Coefficient (r_S): It is defined as the Pearson product moment correlation coefficient between the ranks of X and Y rather than their raw scores. The mathematical definition of this measure is:

  r_S = 1 − 6 Σ_{i=1}^{n} D_i² / (n(n² − 1))    (2)

where n is the sample size and Σ_{i=1}^{n} D_i² is the sum of the squares of the differences between the ranks of the two samples after ranking each sample individually. It is a non-parametric measure that lies between −1 and +1 (both inclusive), these extremes referring to perfect negative and positive correlation respectively. The sign of r_S indicates the direction of the relationship between the actual variables of interest. A value of zero for r_S means that there is no interdependency between the original variables. It requires data on at least an ordinal scale. Using a normal approximation, the statistical significance of r_S may be tested with the usual t-test.

Revista Colombiana de Estadística 37 (2014) 1-24. Electronic copy available at: https://ssrn.com/abstract=3534582

Modified Rank Correlation Coefficient (r_M): It is a modified version of the Spearman rank correlation coefficient based on transforming X and Y into standard scores and then applying the concept of ranking. The mathematical definition of this measure is:

  r_M = 1 − 6 Σ_{i=1}^{n} d_i² / (n(n² − 1))    (3)

where d_i is the difference between the paired ranks obtained as follows: transform the values of X and Y separately into standard scores, assign ranks to the standard scores collectively, and then make separate groups of the ranks according to their respective random samples. In equation (3), Σ_{i=1}^{n} d_i² is the sum of the squares of the differences between the ranks.
It is also a non-parametric measure that takes the value zero for no correlation, and positive and negative values for positive and negative correlation respectively, as in the above case. Values of +1 and −1 refer to perfect positive and negative correlation among the variables of interest.
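As an illustration of the rank-based construction, here is a minimal Python sketch of r_S (ours, not the paper's code); the ranking helper averages tied ranks, and the classical formula is exact when there are no ties:

```python
def ranks(values):
    """1-based ranks; tied values share the average of their rank positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            result[order[k]] = avg_rank
        i = j + 1
    return result

def spearman_rs(x, y):
    """r_S = 1 - 6 * sum(D_i^2) / (n * (n^2 - 1)) on the ranks of x and y."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

A monotone but nonlinear relationship, e.g. y = x², still yields r_S = 1, which is what distinguishes it from r_P.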
Gini Correlation Coefficient (Asymmetric and Symmetric): These correlation measures are based on covariances between the original variables X and Y and their cumulative distribution functions F_X(X) and F_Y(Y). We consider here three measures of association based on Gini's coefficients (two of which are asymmetric and one symmetric). These measures, denoted by r_g1, r_g2 and r_g3, are defined as:

  r_g1 = cov(X, F_Y(Y)) / cov(X, F_X(X))    (4)
  r_g2 = cov(Y, F_X(X)) / cov(Y, F_Y(Y))    (5)
  r_g3 = (G_X r_g1 + G_Y r_g2) / (G_X + G_Y)    (6)

where cov(X, F_Y(Y)) is the covariance between X and the cumulative distribution function of Y; cov(X, F_X(X)) is the covariance between X and its own cumulative distribution function; cov(Y, F_X(X)) is the covariance between Y and the cumulative distribution function of X; cov(Y, F_Y(Y)) is the covariance between Y and its own cumulative distribution function; and G_X = 4 cov(X, F_X(X)) and G_Y = 4 cov(Y, F_Y(Y)).
In the measures given in (4)-(6), r_g1 and r_g2 are the asymmetric Gini correlation coefficients while r_g3 is the symmetric Gini correlation coefficient. Some properties of the Gini correlation coefficients are (cf. Yitzhaki 2003): the Gini correlation is bounded, such that −1 ≤ r_gjs ≤ +1 (j, s = X, Y); if X and Y are independent then r_g1 = r_g2 = 0; r_g2 is not sensitive to a monotonic transformation of Y; in general, r_gjs need not be equal to r_gsj and they may even have different signs; if the random variables Z_j and Z_s are exchangeable up to a linear transformation, then r_gjs = r_gsj.
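In a sample, the unknown distribution functions can be replaced by empirical CDFs. The following Python sketch is our illustration of that plug-in idea (the ECDF estimator and the G-weighted symmetric combination are assumptions of this sketch, not taken verbatim from Yitzhaki (2003)):

```python
def cov(a, b):
    """Covariance with divisor n; any common divisor cancels in the ratios."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n

def ecdf(values):
    """Empirical CDF F(w) = #{v <= w} / n, evaluated at each sample point."""
    n = len(values)
    return [sum(v <= w for v in values) / n for w in values]

def gini_correlations(x, y):
    """Plug-in estimates of r_g1, r_g2 and the G-weighted symmetric r_g3."""
    fx, fy = ecdf(x), ecdf(y)
    r_g1 = cov(x, fy) / cov(x, fx)
    r_g2 = cov(y, fx) / cov(y, fy)
    g_x, g_y = 4 * cov(x, fx), 4 * cov(y, fy)   # Gini mean differences
    r_g3 = (g_x * r_g1 + g_y * r_g2) / (g_x + g_y)
    return r_g1, r_g2, r_g3
```

For strictly monotone data all three estimates equal ±1, reflecting the rank-like behavior of the CDF argument.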
Kendall's Tau (τ): It is a measure of the association between two measured variables of interest X and Y. It is a rank correlation based on the similarity of the orderings of the data when ranked. The mathematical definition of this measure is:

  τ = 2S / (n(n − 1))    (7)

where n is the sample size and S is the difference between the number of pairs in natural order and in reverse natural order. We may define S more precisely as follows: arrange the observations (X_i, Y_i) (where i = 1, 2, . . . , n) in a column according to the magnitude of the X's, with the smallest X first, the second smallest second and so on; then we say that the X's are in natural order. In equation (7), S is equal to P − Q, where P is the number of pairs of the random variable Y in natural order and Q is the number of pairs in reverse order.
This measure is non-parametric, being free of the parent distribution. It takes values between −1 and +1 (both inclusive). A value of zero indicates no correlation, +1 perfect positive and −1 perfect negative correlation. It requires data on at least an ordinal scale. Under independence its mean is zero and its variance is 2(2n + 5)/(9n(n − 1)).
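A direct O(n²) Python sketch of τ and its null variance follows (our illustration; it counts concordant minus discordant pairs, which is exactly S = P − Q when there are no ties):

```python
def sign(v):
    """-1, 0 or +1 according to the sign of v."""
    return (v > 0) - (v < 0)

def kendall_tau(x, y):
    """tau = 2S / (n(n - 1)), with S the concordant-minus-discordant count."""
    n = len(x)
    s = sum(sign(x[j] - x[i]) * sign(y[j] - y[i])
            for i in range(n) for j in range(i + 1, n))
    return 2 * s / (n * (n - 1))

def tau_null_variance(n):
    """Variance of tau under independence: 2(2n + 5) / (9n(n - 1))."""
    return 2 * (2 * n + 5) / (9 * n * (n - 1))
```

The null mean of zero and this variance give the usual normal approximation for moderate n, although the paper uses exact critical values for τ from Daniel (1990).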

Proposed Modifications
Taking motivation from the aforementioned measures given in equations (1)-(7), we suggest here three modified proposals to measure association. In order to define r_M in equation (3), Zimmerman (1994) used the mean as an estimate of the location parameter to convert the variables into standard scores. The mean as a measure of location produces reliable results only when the data are normal, or at least symmetric, because it is highly affected by the presence of outliers as well as by departures from normality. This means that the sample mean is not a robust estimator and hence cannot give trustworthy outcomes in such cases. To overcome this problem, we may use the median and the trimmed mean as alternative measures. The reason is that when distributions are non-normal and/or outliers are present in the data, the median and the trimmed mean exhibit robust behavior, and hence results based on them are expected to be more reliable than those based on the mean.
Based on the above discussion we now suggest three modifications to measure association. These three proposals are modified forms of the Spearman rank correlation coefficient, namely: i) trimmed mean rank correlation using the standard deviation about the trimmed mean; ii) median rank correlation using the standard deviation about the median; iii) median rank correlation using the mean deviation about the median. These proposals are based on the Spearman rank correlation coefficient, in which we transform the variables into standard scores (as in Zimmerman (1994)) using the measures given in (i)-(iii) above. We shall refer to the three proposed modifications with the notations given in Table 2 throughout this article. Keeping intact the descriptions of equations (1)-(7), we now provide explanations of the three proposed modified measures. Before that we define a few terms used in the definitions of r_T, r_MM and r_MS. These are the Standard Deviation using the Trimmed Mean (denoted by SD_1(X) and SD_1(Y) for X and Y respectively), the Mean Deviation about the Median (denoted by MDM(X) and MDM(Y) for X and Y respectively) and the Standard Deviation using the Median (denoted by SD_2(X) and SD_2(Y) for X and Y respectively). These terms are defined as:

  SD_1(X) = sqrt( Σ_{i=1}^{n} (X_i − X̄_t)² / n ),  SD_1(Y) = sqrt( Σ_{i=1}^{n} (Y_i − Ȳ_t)² / n )    (8)
  MDM(X) = Σ_{i=1}^{n} |X_i − X̃| / n,  MDM(Y) = Σ_{i=1}^{n} |Y_i − Ỹ| / n    (9)
  SD_2(X) = sqrt( Σ_{i=1}^{n} (X_i − X̃)² / n ),  SD_2(Y) = sqrt( Σ_{i=1}^{n} (Y_i − Ỹ)² / n )    (10)

In equation (8), X̄_t and Ȳ_t are the trimmed means of X and Y respectively.
In equation (9), X̃ and Ỹ are the medians of X and Y respectively.
In equation (10), all the terms are as defined earlier.
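The three dispersion measures above can be sketched in Python as follows (our illustration; the divisor n is an assumption of this sketch, as the paper's exact divisor is not shown, and `data` is a hypothetical sample):

```python
import math
import statistics

def trimmed_mean(values, k):
    """Mean after dropping the k smallest and k largest observations."""
    s = sorted(values)
    kept = s[k:len(s) - k]
    return sum(kept) / len(kept)

def sd_about(values, center):
    """Standard deviation measured about an arbitrary center (divisor n)."""
    n = len(values)
    return math.sqrt(sum((v - center) ** 2 for v in values) / n)

def mdm(values):
    """Mean deviation about the median."""
    med = statistics.median(values)
    return sum(abs(v - med) for v in values) / len(values)

data = [1.0, 2.0, 3.0, 4.0, 100.0]             # one gross outlier
sd1 = sd_about(data, trimmed_mean(data, 1))     # SD_1: about the trimmed mean
sd2 = sd_about(data, statistics.median(data))   # SD_2: about the median
```

Note how the trimmed mean (3.0 here) and the median (3.0) ignore the outlier that would drag the ordinary mean to 22.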
Based on the above definitions we are now able to define r_T, r_MM and r_MS. The first proposal is:

  r_T = 1 − 6 Σ_{i=1}^{n} d_{i,T}² / (n(n² − 1))    (11)

For equation (11), we first transform the values of the random variables X and Y separately into standard scores using their respective trimmed means and standard deviations about the trimmed means, assign ranks to the standard scores collectively, and then separate the ranks according to their random samples. In equation (11), Σ_{i=1}^{n} d_{i,T}² is the sum of the squares of the differences between the ranks. It is to be mentioned that we have trimmed 2 values from each sample, so the percentages of trimming in our computations are 33%, 25%, 20%, 17%, 13%, 10% and 7% for sample sizes 6, 8, 10, 12, 16, 20 and 30 respectively.
The second proposal is:

  r_MS = 1 − 6 Σ_{i=1}^{n} d_{i,MS}² / (n(n² − 1))    (12)

For equation (12), we first transform the values of the random variables X and Y separately into standard scores using their respective medians and standard deviations about the medians, assign ranks to the standard scores collectively, and then separate the ranks according to their random samples. In equation (12), Σ_{i=1}^{n} d_{i,MS}² is the sum of the squares of the differences between the ranks.
The third proposal is:

  r_MM = 1 − 6 Σ_{i=1}^{n} d_{i,MM}² / (n(n² − 1))    (13)

For equation (13), we first transform the values of the random variables X and Y separately into standard scores using their respective medians and mean deviations about the medians, assign ranks to the standard scores collectively, and then separate the ranks according to their random samples. In equation (13), Σ_{i=1}^{n} d_{i,MM}² is the sum of the squares of the differences between the ranks.
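As a concrete sketch of one proposal, the following Python code implements the r_MM recipe (ours, for illustration; the Spearman-type normalization constant is assumed here and may differ from the paper's, and a non-degenerate sample with MDM > 0 is assumed):

```python
import statistics

def z_scores_median(values):
    """Standard scores using the median and the mean deviation about the
    median (MDM), per the r_MM proposal."""
    med = statistics.median(values)
    mdm = sum(abs(v - med) for v in values) / len(values)
    return [(v - med) / mdm for v in values]

def r_mm(x, y):
    """Sketch of r_MM: robust-standardize each sample, rank the pooled 2n
    scores collectively, split the ranks back into their samples, then apply
    a Spearman-type formula."""
    n = len(x)
    zx, zy = z_scores_median(x), z_scores_median(y)
    pooled = sorted(zx + zy)
    rx = [pooled.index(z) + 1 for z in zx]   # rank within the combined group
    ry = [pooled.index(z) + 1 for z in zy]
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

The r_T and r_MS variants differ only in the centering and scaling step (trimmed mean with SD about the trimmed mean, and median with SD about the median, respectively).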
All the existing measures given in equations (1)-(7) and the proposed modifications given in equations (11)-(13) are non-parametric except the one given in equation (1). The existing measures in equations (1)-(7) have many attractive properties in their own right (e.g. see Spearman 1904, Kendall 1938, Zimmerman 1994, Gauthier 2001, Yitzhaki 2003, Mudelsee 2003, Walker 2003, Maturi & Elsayigh 2010). But it is hard to find articles in the existing literature that compare the performance of these measures simultaneously under different distributional environments. This is one of the motivations of this study. Additionally, we investigate the performances (in terms of power) of our proposed modifications under different probability models and compare them with their existing counterparts. Although other tests are available to serve the purpose, the reason for choosing these ten out of many is their novelty.
There are different ways to use the information in data (such as ratio, interval, ordinal and count), and each test has its own strategy for exploiting this information. The tests considered here cover almost all of the common approaches. Although results for the usual ones may be readily available, their comparison in a broader frame will provide useful and interesting results. The main objective of this study is to investigate the performance of these different methods/measures and see which of them have optimal efficiency under different distributional environments of the parent populations, following the line of action of Munir, Asghar & Riaz (2011).
This investigation would help us grade the performance of these different methods for measuring and testing the association parameter under different parent situations. Consequently, practitioners may benefit by picking the most appropriate measure(s) to reach the correct decision in a given situation. Practitioners generally prefer statistical measures or methods that have higher power and use them in their research (cf. Mahoney & Magel 1996), so the findings of this research would be of great value for their future studies.

Performance Evaluations
Power is an important measure of the performance of a testing procedure. It is the probability of rejecting H_0 when it is false, i.e. the probability that a statistical procedure will lead to a correct decision. In this section we evaluate the power of the ten association measures under consideration and find out which of them have relatively higher power than the others under different parent situations. To calculate the power of the different methods of measuring and testing association, we have used the following procedure.
Let X and Y be the two correlated random variables referring to the two interdependent characteristics of interest, from which we have a random sample of n pairs in the form (x_1, y_1), (x_2, y_2), . . . , (x_n, y_n) from a bivariate population. To get the desired level of correlation between X and Y the steps are:
• Let X and W be independent random variables, and let Y be a transformed random variable defined as Y = a(X) + b(W);
• The correlation between X and Y is then given by r_XY = a / sqrt(a² + b²), where a and b are constants;
• The expression for a in terms of b and r_XY may be written as a = b r_XY / sqrt(1 − r_XY²), and by putting the desired level of correlation into this equation we get the value of a;
• For the above mentioned values of a and b we can now obtain the variables X and Y having our desired correlation level.
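The steps above can be sketched in Python as follows (our illustration for standard normal X and W; the seed and the correlation-check helper are ours, not part of the paper's procedure):

```python
import math
import random

def correlated_sample(n, rho, rng, b=1.0):
    """Generate (X, Y) with correlation rho: X, W independent N(0, 1) and
    Y = a*X + b*W, so corr(X, Y) = a / sqrt(a^2 + b^2). Solving for a
    gives a = b * rho / sqrt(1 - rho^2)."""
    a = b * rho / math.sqrt(1 - rho * rho)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    ws = [rng.gauss(0.0, 1.0) for _ in range(n)]
    ys = [a * xi + b * wi for xi, wi in zip(xs, ws)]
    return xs, ys

def sample_corr(x, y):
    """Plain Pearson correlation, used here only to check the construction."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    c = sum((u - mx) * (v - my) for u, v in zip(x, y))
    vx = sum((u - mx) ** 2 for u in x)
    vy = sum((v - my) ** 2 for v in y)
    return c / math.sqrt(vx * vy)

rng = random.Random(1)
x, y = correlated_sample(50_000, 0.6, rng)  # sample_corr(x, y) lands near 0.6
```

Other parent distributions are obtained by replacing the normal draws for X and W with draws from the distribution under study.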
Hypotheses and Testing Procedures: For our study purposes we state the null and alternative hypotheses as H_0 : ρ = 0 versus H_1 : ρ > 0. This is a one-sided version of the hypothesis; the two-sided case may be defined analogously. It is supposed that the samples are drawn under all the assumptions needed for a valid application of all the methods related to the association measures of this study. We compute the values of the test statistics for all ten methods for different choices of ρ (on the positive side only, because of the right-sided alternative hypothesis) and calculate their chances of rejecting H_0 by comparing them with their corresponding critical values. Under H_0 these probabilities refer to the significance level, while under H_1 they give the power of the test. It is to be mentioned that, to test the aforementioned H_0 vs. H_1, we have converted all the coefficients of association (except Kendall's tau) into the following statistic:

  t_a = r_a sqrt(n − 2) / sqrt(1 − r_a²)    (14)

where t_a follows the Student t-distribution with n − 2 degrees of freedom (i.e. t_{n−2}) and r_a is the correlation coefficient calculated by any of the association methods of this study.
Computational Details of Experimentation: We have computed the powers of the ten methods of measuring and testing association by fixing the significance level at α, using a simulation code developed in MINITAB. The critical values at a given α are obtained from the table of t_{n−2} for all the measures given in equations (1)-(7) and (11)-(13), via their corresponding test statistics from equation (14), except for Kendall's coefficient given in equation (7). For Kendall's tau (τ) we have used the true critical values as given in Daniel (1990); the reason is that the approximation in equation (14) works fairly well in all other cases but not for Kendall's tau (as we observed in our computations). A change in the shape of the parent distribution demands an adjustment of the corresponding critical values, which our simulation algorithm makes for all ten methods so as to achieve the desired value of α. For the different choices of ρ = 0, 0.25, 0.5 and 0.75, powers are obtained with the help of our simulation code in MINITAB at significance level α.
We have considered the thirteen representative bivariate environments mentioned above for n = 6, 8, 10, 12, 16, 20, 30 at varying values of α. For these choices of n and α we have run our MINITAB simulation code (developed for the ten methods under investigation) 10,000 times for the power computations. The resulting power values are given in the tables in the Appendix for all thirteen probability distributions and all ten methods, for selective choices from the above mentioned values of n at α = 0.05. For the sake of brevity we omit the results at other choices of α such as 0.01 and 0.005.
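The simulation loop for a single method, Pearson's coefficient with the t approximation under bivariate normality, can be sketched as follows (a Python reconstruction for illustration; the authors' code was written in MINITAB, and the critical value 1.734 is the standard one-sided 5% point of t with 18 degrees of freedom):

```python
import math
import random

def simulate_power(n, rho, t_crit, n_rep, rng):
    """Monte Carlo power of the one-sided t test of H0: rho = 0 based on the
    Pearson coefficient, for bivariate normal data built as Y = a*X + b*W."""
    b = 1.0
    a = b * rho / math.sqrt(1 - rho * rho)
    rejections = 0
    for _ in range(n_rep):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        ys = [a * u + b * rng.gauss(0.0, 1.0) for u in xs]
        # Pearson r for this replicate
        mx, my = sum(xs) / n, sum(ys) / n
        c = sum((u - mx) * (v - my) for u, v in zip(xs, ys))
        vx = sum((u - mx) ** 2 for u in xs)
        vy = sum((v - my) ** 2 for v in ys)
        r = c / math.sqrt(vx * vy)
        t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)  # the t conversion
        if t > t_crit:
            rejections += 1
    return rejections / n_rep

rng = random.Random(7)
T_CRIT = 1.734  # one-sided 5% critical value of t with 18 df, from t tables
size_est = simulate_power(20, 0.0, T_CRIT, 4000, rng)    # near alpha = 0.05
power_est = simulate_power(20, 0.75, T_CRIT, 4000, rng)  # substantially larger
```

Replacing the inner coefficient computation (and, for non-normal parents, the simulated critical value) yields the corresponding power estimates for the other nine methods.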

Comparative Analysis
This section presents a comparative analysis of the existing and proposed association measures. For ease of discussion and comparison, the power values mentioned above are also displayed graphically in the form of power curves for all thirteen probability distributions, for particular sample sizes and the ten association methods in selected cases. These graphs are shown in Figures 1-13, where the values ρ = 0, 0.25, 0.5 and 0.75 are taken on the horizontal axis and the powers on the vertical axis. Each figure is for a different parent distribution with different sample sizes and contains the power curves of all ten methods, labeled according to the notations given in Tables 1 and 2. The power analysis above (cf. Tables A1-A13 and Figures 1-13) indicates the following:
• With an increase in the value of n and/or ρ, the power of all the association measures improves for all distributions.
• In general, the Pearson correlation coefficient is superior to the Spearman rank correlation, Kendall's tau, the modified rank correlation coefficient and the proposed methods under the normal distribution. However, in some cases of the normal distribution the Gini correlation coefficients work better than the Pearson correlation coefficient.
• In non-normal distributions and in the presence of outliers (contamination), the Pearson correlation coefficient yields less power than the Spearman rank correlation, the modified rank correlation coefficient and the proposed methods, except in the half normal, uniform, mixed normal and Laplace distributions. The Gini correlation coefficients r_g1 and r_g2, in general, remain better in terms of power than the Pearson correlation coefficient.
• Among the three Gini correlation coefficients, r_g1 performs better than r_g2 and r_g3.
• The proposed three modifications yield higher power than the Spearman correlation coefficient, in general, in all the distributional environments. Moreover, in contaminated distributions the median rank correlation coefficient using the mean deviation about the median (r_MM) works better than the modified rank correlation coefficient for all sample sizes.
• Kendall's tau has lower power than the Spearman rank correlation coefficient, the modified rank correlation coefficient and the proposed methods. In the Weibull, Mixed Weibull and Lognormal distributions, Kendall's tau has more power than the Gini correlation coefficient r_g2; for these three distributions, if the sample size is greater than ten, Kendall's tau also outperforms the Pearson correlation coefficient and the Gini correlation coefficient r_g3. In the outlier cases, Kendall's tau is superior to the Pearson correlation coefficient and the two Gini correlation coefficients (r_g2 and r_g3) for moderate sample sizes.
• From the analysis above, it is pertinent to note that the Gini correlation coefficient r_g1 is a better choice for measuring and testing association than the Spearman rank correlation coefficient, Kendall's tau, the modified rank correlation coefficient and the proposed methods in normal, non-normal and contaminated distributions.
• The powers of r_MM, r_MS, r_T and r_M differ only slightly from one another in all the distributional environments, which means that they are close competitors.
It is to be mentioned that other testing measures may also be evaluated along similar lines, but we think that the options we have chosen cover the most practical ones.

Numerical Illustration
Besides the evidence in terms of statistical efficiency, it is very useful to test a technique on real data for its practical implications. For this purpose we consider a data set from Zimmerman (1994) on two variables of scores. The data set, given in Table 3, contains eight pairs of scores as reported by Zimmerman (1994). We state our null hypothesis as: there is no association between the two variables (i.e. H_0 : ρ = 0) versus the alternative hypothesis H_1 : ρ > 0. Fixing the level of significance at α = 0.05, we apply all ten methods and see what decisions they give for the data set in Table 3. The values of the test statistics and their corresponding decisions are given in Table 4. The critical values used are 0.571 for Kendall's tau and 1.94 for all the other tests.
It is obvious from the analysis of Table 4 that t_P, t_M and t_T reject H_0 while all the others do not. This is, in general, in accordance with the earlier findings. We may, therefore, sum up that this study will be of great use for practitioners and researchers who make use of these measures frequently in their research projects.

Summary and Conclusions
This study has evaluated the performance of different association measures, including some existing ones and a few newly suggested modifications. One of these measures is parametric and the others are non-parametric. Performance evaluations (in terms of power) and comparisons are carried out under different symmetric, skewed and contaminated probability distributions, including the Normal, Cauchy, Uniform, Laplace, Lognormal, Exponential, Weibull, Gamma, t, Chi-square, Half Normal, Mixed Weibull and Mixed Normal. The power evaluations of this study revealed that under the normal distribution the Pearson correlation coefficient is the best choice for measuring association. Further, we observed that the Pearson correlation coefficient and the Gini correlation coefficients (r_g2 and r_g3) have superior power performance to the Spearman rank correlation, the modified rank correlation and the proposed correlation coefficients for symmetric and low peaked distributions. But in non-symmetric and highly peaked distributions the Spearman rank correlation, the modified rank correlation and the proposed correlation coefficients deliver higher power than the Pearson correlation coefficient and the two Gini correlation coefficients (r_g2 and r_g3).
In contaminated distributions, r_MM exhibited better performance than the modified rank correlation coefficient. The Gini correlation coefficient r_g1 performed better than the Spearman rank correlation, the modified rank correlation, Kendall's tau and the proposed correlation coefficients in symmetric, asymmetric, low peaked, highly peaked and contaminated distributions.

Appendix

Table A1: Probability of rejecting the null hypothesis of independence for N(0, 1).
Table A2: Probability of rejecting the null hypothesis of independence for W(0.5, 3).
Table A3: Probability of rejecting the null hypothesis of independence for the mixed Weibull distribution (i.e. W(0.5, 3) with probability 0.95 and W(1, 2) with probability 0.05).
Table A4: Probability of rejecting the null hypothesis of independence for LG(5, 4).
Table A5: Probability of rejecting the null hypothesis of independence for Exp(0.5).
Table A8: Probability of rejecting the null hypothesis of independence for contaminated Weibull (i.e. W(0.5, 3) with 5% outliers from W(50, 100)).
Table A11: Probability of rejecting the null hypothesis of independence for U(0, 1).