Article

On Some Test Statistics for Testing the Regression Coefficients in Presence of Multicollinearity: A Simulation Study

by Sergio Perez-Melo and B. M. Golam Kibria *
Department of Mathematics and Statistics, Florida International University, University Park, Miami, FL 33199, USA
* Author to whom correspondence should be addressed.
Stats 2020, 3(1), 40-55; https://doi.org/10.3390/stats3010005
Submission received: 21 January 2020 / Revised: 19 February 2020 / Accepted: 5 March 2020 / Published: 10 March 2020
(This article belongs to the Section Computational Statistics)

Abstract

Ridge regression is a popular method to solve the multicollinearity problem for both linear and non-linear regression models. This paper studied forty different ridge regression t-type tests of the individual coefficients of a linear regression model. A simulation study was conducted to evaluate the performance of the proposed tests with respect to their empirical sizes and powers under different settings. Our simulation results demonstrated that many of the proposed tests have Type I error rates close to the 5% nominal level and that, among those, all tests except one offer considerable gains in power over the standard ordinary least squares (OLS) t-type test. It was observed from our simulation results that seven tests based on some ridge estimators performed better than the rest in terms of achieving higher power gains while maintaining the 5% nominal size.

1. Introduction

Multicollinearity is the occurrence of high inter-correlations among the independent variables in a multiple regression model. When this condition is present, it can result in unstable and unreliable regression coefficient estimates if the method of ordinary least squares is used. One of the proposed solutions to the problem of multicollinearity is ridge regression, as pioneered by Hoerl and Kennard [1]. They showed that there is a nonzero value of k (the ridge or shrinkage parameter) for which the mean square error (MSE) of the ridge regression estimator is smaller than the variance of the ordinary least squares (OLS) estimator.
Estimating the shrinkage parameter k is a vital issue in the ridge regression model. Several researchers have worked in this area at different periods of time and proposed different estimators for k. To mention a few: Hoerl and Kennard [1], Hoerl, Kennard and Baldwin [2], Lawless and Wang [3], Gibbons [4], Nomura [5], Kibria [6], Khalaf [7], Khalaf and Shukur [8], Alkhamisi and Shukur [9], Muniz and Kibria [10], Feras and Gore [11], Gruber [12], Muniz et al. [13], Mansson et al. [14], Hefnawy and Farag [15], Roozbeh and Arashi [16], Arashi and Valizadeh [17], Aslam [18], Asar and Karaibrahimoğlu [19], Saleh et al. [20], Asar and Erişoğlu [21], Goktas and Sevinc [22], Fallah et al. [23], Norouzirad and Arashi [24], and very recently Saleh et al. [25], among others.
It is well known that, to make inference about an unknown population parameter, one may consider both confidence intervals and hypothesis testing. However, the literature on test statistics for testing the regression coefficients under the ridge regression model is very limited. First, Halawa and Bassiouni [26] proposed non-exact t-tests for the regression coefficients under ridge regression estimation and compared the empirical sizes and powers of only two tests, based on the estimators of k proposed by Hoerl and Kennard [1] and Hoerl, Kennard, and Baldwin [2]. Their results showed that, for models with large standard errors, the ridge-based t-tests have correct sizes with considerable gains in power over the least squares t-test. For models with small standard errors, the tests were found to slightly exceed the nominal level in a few cases. Cule et al. [27] evaluated the performance of the tests proposed by Hoerl and Kennard [1], Hoerl, Kennard, and Baldwin [2], and Lawless and Wang [3] for linear ridge and logistic ridge regression models. Gokpinar and Ebegil [28] evaluated the performance of the t-tests based on 22 different estimators of the ridge parameter k collected from the published literature. Finally, Kibria and Banik [29] analyzed the performance of the t-tests based on 16 popular estimators of the ridge parameter.
Since different ridge regression estimators have been considered by different researchers at different times, under different simulation methods and conditions, the resulting tests of regression coefficients under the ridge regression model are not comparable as a whole on the basis of size (Type I error) and power properties. Therefore, the main contribution of this paper is a more comprehensive comparison of a much larger ensemble of available t-test statistics for testing regression coefficients. We consider most of the tests analyzed in Gokpinar and Ebegil [28] and Kibria and Banik [29], as well as test statistics based on ridge estimators not included in those studies. In total, our paper compares forty different t-test statistics. The test statistics were compared based on their empirical Type I error and power properties, following the testing procedures detailed in Halawa and Bassiouni [26]. These results are of interest to statistical practitioners who use ridge regression in different fields of application, as a guide to which test statistics to use when testing the significance of variables in their ridge regression models.
This paper is organized as follows. The proposed test statistics for the linear regression model are described in Section 2. To compare the performance of the test statistics, a simulation study is conducted in Section 3. An application is discussed in Section 4. Finally, some concluding remarks are given in Section 5.

2. Test Statistics for Regression Coefficients

Let us consider the following multiple linear regression model:
$$Y = X\beta + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I_n), \qquad \operatorname{rank}(X_{n \times q}) = q \le n \tag{1}$$
where Y is an (n × 1)-dimensional vector of dependent variables centered about their mean, X is an (n × q)-dimensional observed matrix of the regressors, centered and scaled such that $X^T X$ is in correlation form, β is a (q × 1)-dimensional unknown coefficient vector, and ε is an (n × 1) error vector distributed as multivariate normal with mean 0 and variance–covariance matrix $\sigma^2 I_n$, where $I_n$ is the (n × n) identity matrix.
The ordinary least squares (OLS) estimator of the parameter vector β is:
$$\hat{\beta} = (X^T X)^{-1} X^T Y \tag{2}$$
To test whether the i-th component of the parameter vector β is equal to zero, that is,
$$H_0: \beta_i = 0 \quad \text{versus} \quad H_1: \beta_i \neq 0,$$
the following test statistic based on the OLS estimator is used:
$$t = \frac{\hat{\beta}_i}{S(\hat{\beta}_i)} \tag{3}$$
where $\hat{\beta}_i$ is the i-th component of $\hat{\beta}$, and $S(\hat{\beta}_i)$ is the square root of the i-th diagonal element of $\operatorname{Var}(\hat{\beta}) = \hat{\sigma}^2 (X^T X)^{-1}$, with:
$$\hat{\sigma}^2 = (Y - X\hat{\beta})^T (Y - X\hat{\beta}) / (n - q - 1)$$
The test statistic in Equation (3) is the least squares test statistic. Under the null hypothesis, it follows a Student's t-distribution with n − q − 1 degrees of freedom. However, when $X^T X$ is ill-conditioned due to multicollinearity, the least squares estimator in (2) is unstable, with unduly large sampling variance. Adding a constant k to the diagonal elements of $X^T X$ improves the ill-conditioned situation; this is called ridge regression. The ridge estimator of the parameter vector β is then:
$$\hat{\beta}(k) = (X^T X + k I_q)^{-1} X^T Y \tag{4}$$
where k > 0 is the ridge or shrinkage parameter.
The bias, the variance matrix, and the MSE expression of $\hat{\beta}(k)$ are, respectively, given as follows:
$$\operatorname{Bias}(\hat{\beta}(k)) = E(\hat{\beta}(k)) - \beta = -k (X^T X + k I_q)^{-1} \beta$$
$$\operatorname{Var}(\hat{\beta}(k)) = \sigma^2 (X^T X + k I_q)^{-1} X^T X (X^T X + k I_q)^{-1}$$
$$\operatorname{MSE}(\hat{\beta}(k)) = \sigma^2 (X^T X + k I_q)^{-1} X^T X (X^T X + k I_q)^{-1} + k^2 \beta^T (X^T X + k I_q)^{-2} \beta \tag{5}$$
and $\sigma^2$ is estimated as follows:
$$\hat{\sigma}_k^2 = \frac{(Y - X\hat{\beta}(k))^T (Y - X\hat{\beta}(k))}{n - q - 1} \tag{6}$$
To test whether the i-th component of the parameter vector β is equal to zero, Halawa and Bassiouni [26] proposed the following t-test statistic based on the ridge estimator of the parameter vector:
$$t_k = \frac{\hat{\beta}_i(k)}{S(\hat{\beta}_i(k))} \tag{7}$$
where $\hat{\beta}_i(k)$ is the i-th element of $\hat{\beta}(k)$, and $S(\hat{\beta}_i(k))$ is the square root of the i-th diagonal element of $\operatorname{Var}(\hat{\beta}(k))$.
Under the null hypothesis, the test statistic (7) was shown to be approximately distributed as a Student's t-distribution with n − q − 1 degrees of freedom. For more details on this topic, see Halawa and Bassiouni [26], among others.
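The construction in Equations (2)–(7) is straightforward to verify numerically. The following base-R sketch computes the ridge t-statistics of Equation (7) for a supplied value of k, with k = 0 recovering the OLS test of Equation (3); the simulated data and the helper name ridge_t are ours, for illustration only:

# Ridge t-statistics of Equation (7); k = 0 reduces to the OLS test of Equation (3).
set.seed(1)
n <- 50; q <- 4
X <- scale(matrix(rnorm(n * q), n, q)) / sqrt(n - 1)  # t(X) %*% X in correlation form
y <- scale(X %*% rep(1, q) + rnorm(n), scale = FALSE) # centered response

ridge_t <- function(X, y, k) {
  q   <- ncol(X); n <- nrow(X)
  XtX <- crossprod(X)
  W   <- solve(XtX + k * diag(q))              # (X'X + kI)^(-1)
  b_k <- W %*% crossprod(X, y)                 # ridge estimator, Equation (4)
  s2  <- sum((y - X %*% b_k)^2) / (n - q - 1)  # residual variance, Equation (6)
  V   <- s2 * W %*% XtX %*% W                  # variance matrix from Equation (5)
  drop(b_k) / sqrt(diag(V))                    # t_k of Equation (7)
}

ridge_t(X, y, k = 0)    # OLS t-statistics
ridge_t(X, y, k = 0.1)  # ridge-based t-statistics for an arbitrary k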

Values of the Ridge Estimator k Considered for the Test Statistic $t_k$

Since the ridge parameter k is unknown, it needs to be estimated from observed data. This section gives the formulas for the forty different ridge regression estimators considered in our simulation study for the test statistic defined in (7). Table 1 below shows the estimators. For details on how the estimators were derived, we refer the readers to the corresponding original papers that are available in the list of references [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36].
Here, $\hat{\alpha}$ is defined as:
$$\hat{\alpha} = P^T \hat{\beta}$$
where P is an orthonormal matrix satisfying $P^T X^T X P = \Lambda$, and Λ is the diagonal matrix of the eigenvalues $\lambda_j$ (j = 1, 2, …, q) of $X^T X$. To calculate the values of the test statistics, we plugged each of the k values in Table 1 into Equations (4)–(7) and thus obtained 40 different test statistics. Since a theoretical comparison among the test statistics was not possible, a simulation study was conducted in the following section to evaluate the performance of the suggested tests.
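As an illustration of how the Table 1 estimators are computed from this spectral decomposition, the following sketch (continuing the snippet in Section 2) evaluates three of them; the variable names are ours:

# alpha-hat = P' beta-hat and three of the Table 1 ridge estimators.
eig   <- eigen(crossprod(X))
P     <- eig$vectors                        # satisfies t(P) %*% crossprod(X) %*% P = Lambda
lam   <- eig$values                         # eigenvalues lambda_j, used by other estimators
b_ols <- solve(crossprod(X), crossprod(X, y))
alpha <- drop(t(P) %*% b_ols)
s2    <- sum((y - X %*% b_ols)^2) / (nrow(X) - ncol(X) - 1)

k_HKB    <- ncol(X) * s2 / sum(alpha^2)  # Hoerl, Kennard and Baldwin [2]
k_KibMED <- median(s2 / alpha^2)         # Kibria [6], median version
k_KibGM  <- exp(mean(log(s2 / alpha^2))) # Kibria [6], geometric mean version
c(k_HKB, k_KibMED, k_KibGM)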

3. Simulation Study

Our simulation study has two parts. First, we analyzed the empirical Type I error of the tests. The test statistics that achieved the 5% nominal size were kept, and the ones that deviated significantly from it were discarded. The second part of the simulation study then compared the test statistics that achieved the 5% nominal size with regard to statistical power.

3.1. Type I Error Rates Simulation Procedure

RStudio was used for all calculations in this paper, and the R package lmridge was used to fit the ridge regression models. For the empirical Type I error and power simulations, we considered sample sizes n = 30, 50, 80, and 100 and numbers of regressors q = 4, 6, 8, 10, and 25; the standard deviation of the error term was chosen as σ = 1. To study the effects of multicollinearity, we set the common correlation among the regressors to ρ = 0.80 and 0.95. An n × q matrix X was created as $X = H \Lambda^{0.5} G^T$, where H is an (n × q) matrix with orthonormal columns, Λ is the diagonal matrix of eigenvalues of the correlation matrix, and G is the matrix of normalized eigenvectors of the correlation matrix. Following Halawa and Bassiouni [26], our study was based on the most favorable (MF) direction of β for model (1). The MF orientation of β corresponds to the largest normalized eigenvector of the matrix $X^T X$, which is a vector of the form $(1/\sqrt{q})\,\mathbf{1}_q$. We chose not to use the least favorable (LF) orientation of β in our simulation, since the available literature shows that both orientations give similar results in terms of Type I error and power. For a detailed explanation of the MF and LF directions of β and other details of the simulation procedure, please see Halawa and Bassiouni [26].
To estimate the 5% nominal size (α = 0.05) for testing $H_0: \beta_i = 0$ versus $H_1: \beta_i \neq 0$ under the different conditions, 5000 pseudo-random vectors from N(0, σ²) were generated to form the error term in (1). Without loss of generality, we set the intercept in (1) to zero. Under the null model, obtained by substituting zero for the i-th element of the MF β, model (1) was used to generate 5000 simulated vectors of Y. The estimated sizes were computed as the percentage of times the absolute value of each selected test statistic exceeded the critical value $t_{0.025,\, n-q-1}$.
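A condensed sketch of this size simulation for one (n, q, ρ) setting is given below; it reuses the ridge_t helper from Section 2, takes any estimator of k as a plug-in function, and is a simplification of our actual study (which looped over all settings and all forty estimators):

# Empirical size for one setting; k_fun maps (X, y) to a ridge parameter k.
empirical_size <- function(n, q, rho, k_fun, nsim = 5000, alpha = 0.05) {
  R  <- matrix(rho, q, q); diag(R) <- 1                # equicorrelation matrix
  eg <- eigen(R)                                       # Lambda and G
  H  <- qr.Q(qr(matrix(rnorm(n * q), n, q)))           # orthonormal columns
  X  <- H %*% diag(sqrt(eg$values)) %*% t(eg$vectors)  # X = H Lambda^(1/2) G'
  beta    <- rep(1 / sqrt(q), q)                       # most favorable direction
  beta[1] <- 0                                         # null hypothesis: beta_1 = 0
  crit <- qt(1 - alpha / 2, df = n - q - 1)
  rej  <- replicate(nsim, {
    y <- X %*% beta + rnorm(n)                         # sigma = 1
    abs(ridge_t(X, y, k_fun(X, y))[1]) > crit
  })
  mean(rej)
}

empirical_size(30, 4, 0.80, k_fun = function(X, y) 0)  # OLS: should be near 0.05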

3.2. Type I Error Rates: Simulation Results

In Table 2 and Table 3, we recorded the empirical sizes of the tests for the MF orientation for correlation levels of 0.80 and 0.95, respectively.
If the true Type I error rate is 5%, then, for a simulation based on 5000 runs, the observed Type I error will fall, 95% of the time, in the interval $0.05 \pm 2\sqrt{0.05 \times 0.95 / 5000} \approx (4.4\%,\ 5.6\%)$. Tests whose observed average Type I error fell outside this range were not considered for the power comparison.
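The margin above is the usual two-standard-error Monte Carlo bound for a binomial proportion and can be checked in one line:

0.05 + c(-2, 2) * sqrt(0.05 * 0.95 / 5000)  # 0.0438 0.0562, i.e., (4.4%, 5.6%)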
Based on the above tables, we observed the following:
(i)
The tests based on the following ridge estimators, KVR, KKibAM, KM2, KM3, KM4, KM6, KM8, KM9, KM10, KM12, KASH, KSG1, KSG2, and KD1, have Type I errors well above the 5% nominal size and therefore cannot be recommended.
(ii)
The tests based on the following ridge estimators, KM11, KNOM, and KFG, did not exceed the 5% nominal size but stayed below it, at around 3% to 4%, and are therefore too conservative to be recommended.
(iii)
The rest of the tests (including the test based on the ordinary least squares estimator) were, on average, very close to the nominal size of 5% for different sample sizes, number of variables, and levels of correlation analyzed. These tests were the ones that were compared in terms of statistical power.
We also carried out simulations for nominal sizes of 10% and 1%, and the behavior of the tests was consistent with what was observed for a nominal size of 5%. Those results are available upon request. However, we are including a table of simulated Type I errors for nominal size 1% and correlation level 0.95 in Table 4 so that one can verify that the behavior of the tests was consistent with the results for 5% nominal size shown before.

3.3. Statistical Power Simulation Procedure

After calculating the empirical Type I error rates of the tests based on our initial forty ridge estimators, we discarded the seventeen whose empirical size was not between 4.4% and 5.6%. The remaining twenty-three test statistics were compared in terms of power. Following Gokpinar and Ebegil [28], we replaced the i-th component of the β vector by $J \cdot w(0) \cdot \sigma$, where J is a positive integer and $w^2(0) = (1 + (q-2)\rho) / [(1-\rho)(1+(q-1)\rho)]$, so that $w(0)\,\sigma$ is the standard deviation of $\hat{\beta}_i$ under model (1). We picked J = 6, since that value achieved a power of approximately 80% for the OLS test when q = 4, and having a sizeable power for the OLS test allowed a better comparison with the other tests.
Based on 5000 simulation runs, the power of each test was computed as the proportion of times the absolute value of the test statistic exceeded the critical value $t_{0.025,\, n-q-1}$. All combinations of sample sizes n = 30, 50, 100 and numbers of regressors q = 4, 6, 10 were considered under correlation levels of 0.80 and 0.95.
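For concreteness, the scaling factor w(0) can be computed as below; in the power runs, the tested coefficient in the empirical_size() sketch above is set to J·w(0)·σ instead of zero (function name ours):

# Scaling factor w(0) for the alternative hypothesis.
w0 <- function(q, rho) sqrt((1 + (q - 2) * rho) / ((1 - rho) * (1 + (q - 1) * rho)))
6 * w0(4, 0.80) * 1  # tested coefficient value for J = 6, q = 4, rho = 0.80, sigma = 1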

3.4. Statistical Power: Simulation Results

We recorded the empirical statistical power of the tests for the MF orientation for correlation levels of 0.8 and 0.95 in Table 5 and Table 6, respectively.
For a better visualization of the power of the ridge tests vs. the OLS test, we provided the power of the test for α = 0.05 and ρ = 0.80 and q = 4, 6, and 10 in Figure 1, Figure 2 and Figure 3, respectively.
For a better visualization of the power of the ridge tests vs. the OLS test, we provided the power of the test for α = 0.05 and ρ = 0.95 and q = 4, 6, and 10 in Figure 4, Figure 5 and Figure 6, respectively.
Table 7 provides the average gains in power of the tests with respect to the OLS test for both levels of correlation, namely 0.80 and 0.95.
Based on the above tables, we observed the following:
(i)
All the considered tests (with the exception of the one based on KKSAM when n = 30) achieved higher statistical power than the OLS test.
(ii)
Keeping the number of variables in the model fixed, if the sample size increased, the power of the tests also increased, as was expected.
(iii)
Keeping the sample size fixed, increasing the number of variables in the model decreased the power of the tests.
(iv)
Among the tests considered, the ones with the highest gain in power over the OLS test across different values of q, n, and ρ were those based on the following ridge estimators: KHKB, KKibMED, KKibGM, KM5, KKSMAX, KK12, and KD4. The observed gains over the OLS test were between 12% and 28% (see Table 7). Therefore, we recommend these seven tests to data analysis practitioners, since they achieve the highest power among the tests considered while maintaining a 5% probability of Type I error.

4. Application Example

The following car consumption dataset, available at http://data-mining-tutorials.blogspot.com/2010/05/solutions-for-multicollinearity-in.html (see Appendix A), was used to illustrate the findings of the paper.
The goal was to build a linear regression model that predicts the gas consumption of cars from various characteristics: price, engine size, horsepower, and weight. There were n = 27 observations in the dataset. We made use of the mctest and lmridge R packages in our computations. For more information on the functionality of these packages, see Ullah, Aslam, and Altaf [36].
There was strong evidence of multicollinearity in the data, as evidenced by all of the VIFs (variance inflation factors) being greater than 10 (see Table 8).
Also, the condition number, defined as $CN = \left( \dfrac{\text{largest eigenvalue}(X^T X)}{\text{smallest eigenvalue}(X^T X)} \right)^{1/2} = 38.3660$, was greater than 30, indicating high dependency between the explanatory variables. Since multicollinearity existed, ridge regression estimation was preferable to OLS estimation for this model. We contrasted the results of the OLS method with ridge regression using two of the ridge estimators that showed higher power, namely KKibMED and KKibGM; the analyses are given in Table 9.
From Table 9, we observed that no variable except weight was a significant predictor of car consumption under OLS estimation. When ridge regression was applied, all variables (price, engine size, horsepower, and weight of the car) became significant predictors of car consumption, and the MSE of the coefficient vector (computed using Equation (5)) also decreased compared to the OLS estimate, as is expected when a ridge regression approach is appropriate. Also, the sign of the coefficient of horsepower reversed from negative and non-significant under OLS estimation to positive and significant under ridge regression estimation. A change of sign in the coefficients is one of the signals that ridge regression is a good fit for a particular problem, as explained in the foundational paper by Hoerl and Kennard [1]. It also makes physical sense that higher horsepower leads to higher gas consumption, so a positive coefficient is the right sign.
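A sketch of how this analysis can be reproduced is shown below. It assumes the Table A1 data are loaded into a data frame cars_df with columns Price, EngineSize, Horsepower, Weight, and Consumption (our names); lmridge() comes from the lmridge package [36], while the VIFs and condition number of Table 8 follow directly from the correlation matrix of the regressors:

library(lmridge)  # ridge regression fitting; see Ullah, Aslam and Altaf [36]

# cars_df: data frame holding the Table A1 columns (column names are ours).
Z <- as.matrix(cars_df[, c("Price", "EngineSize", "Horsepower", "Weight")])
R <- cor(Z)

diag(solve(R))                                     # VIFs, cf. Table 8
sqrt(max(eigen(R)$values) / min(eigen(R)$values))  # condition number CN

# OLS fit versus ridge fits at the two recommended k values of Table 9.
summary(lm(Consumption ~ Price + EngineSize + Horsepower + Weight, data = cars_df))
summary(lmridge(Consumption ~ Price + EngineSize + Horsepower + Weight,
                data = cars_df, K = c(0.34766, 0.38798)))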

5. Some Concluding Remarks

In this paper, we investigated forty different ridge regression estimators in order to find good test statistics for testing the regression coefficients of the linear regression model in the presence of multicollinearity. A simulation study under different conditions was conducted to make an empirical comparison among the ridge regression estimators. We compared the performance of the test statistics based on the empirical size and power of the tests. It was observed from our simulations that the tests based on the ridge estimators KHKB, KKibMED, KKibGM, KM5, KKSMAX, KK12, and KD4 were the best in terms of achieving higher power gains with respect to the OLS test while maintaining a 5% nominal size.
Our results are consistent with Kibria and Banik [29], although they did not conclude which tests were the best. While Gokpinar and Ebegil [28] concluded that the best tests in terms of power were those based on KHSL and KHKB, we found that the gains in power over the OLS test for KHSL are somewhat smaller than those for the tests based on the seven estimators mentioned above, and we therefore did not include KHSL in our final list.
All in all, based on our simulation results, we recommend the tests based on KHKB, KKibMED, KKibGM, KM5, KKSMAX, KK12, and KD4 to statistical practitioners for the purpose of testing linear regression coefficients when multicollinearity is present.

Author Contributions

B.M.G.K. dedicates this paper to Bangabandhu Sheikh Mujibur Rahman, the great leader and Father of the Nation of Bangladesh. S.P.M. dedicates this paper to his parents, Consuelo de la Caridad Melo Seivane and Sergio Mariano Perez Trujillo. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors are thankful to all four referees for their valuable comments and suggestions, which certainly improved the presentation and quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Data set.

Model | Price | Engine Size | Horsepower | Weight | Gas Consumption
Daihatsu Cuore | 11600.00 | 846.00 | 32.00 | 650.00 | 5.70
Suzuki Swift 1.0 GL | 12490.00 | 993.00 | 39.00 | 790.00 | 5.80
Fiat Panda Mambo L | 10450.00 | 899.00 | 29.00 | 730.00 | 6.10
VW Polo 1.4 60 | 17140.00 | 1390.00 | 44.00 | 955.00 | 6.50
Opel Corsa 1.2i Eco | 14825.00 | 1195.00 | 33.00 | 895.00 | 6.80
Subaru Vivio 4WD | 13730.00 | 658.00 | 32.00 | 740.00 | 6.80
Toyota Corolla | 19490.00 | 1331.00 | 55.00 | 1010.00 | 7.10
Opel Astra 1.6i 16V | 25000.00 | 1597.00 | 74.00 | 1080.00 | 7.40
Peugeot 306 XS 108 | 22350.00 | 1761.00 | 74.00 | 1100.00 | 9.00
Renault Safrane 2.2 | 36600.00 | 2165.00 | 101.00 | 1500.00 | 11.70
Seat Ibiza 2.0 GTI | 22500.00 | 1983.00 | 85.00 | 1075.00 | 9.50
VW Golt 2.0 GTI | 31580.00 | 1984.00 | 85.00 | 1155.00 | 9.50
Citroen ZX Volcane | 28750.00 | 1998.00 | 89.00 | 1140.00 | 8.80
Fiat Tempra 1.6 Lib | 22600.00 | 1580.00 | 65.00 | 1080.00 | 9.30
Fort Escort 1.4i PT | 20300.00 | 1390.00 | 54.00 | 1110.00 | 8.60
Honda Civic Joker 1 | 19900.00 | 1396.00 | 66.00 | 1140.00 | 7.70
Volvo 850 2.5 | 39800.00 | 2435.00 | 106.00 | 1370.00 | 10.80
Ford Fiesta 1.2 Zet | 19740.00 | 1242.00 | 55.00 | 940.00 | 6.60
Hyundai Sonata 3000 | 38990.00 | 2972.00 | 107.00 | 1400.00 | 11.70
Lancia K 3.0 LS | 50800.00 | 2958.00 | 150.00 | 1550.00 | 11.90
Mazda Hachtback V | 36200.00 | 2497.00 | 122.00 | 1330.00 | 10.80
Opel Omega 2.5i V6 | 47700.00 | 2496.00 | 125.00 | 1670.00 | 11.30
Peugeot 806 2.0 | 36950.00 | 1998.00 | 89.00 | 1560.00 | 10.80
Nissan Primera 2.0 | 26950.00 | 1997.00 | 92.00 | 1240.00 | 9.20
Seat Alhambra 2.0 | 36400.00 | 1984.00 | 85.00 | 1635.00 | 11.60
Toyota Previa salon | 50900.00 | 2438.00 | 97.00 | 1800.00 | 12.80
Volvo 960 Kombi aut | 49300.00 | 2473.00 | 125.00 | 1570.00 | 12.70

References

  1. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for non-orthogonal problems. Technometrics 1970, 12, 55–67. [Google Scholar] [CrossRef]
  2. Hoerl, A.E.; Kennard, R.W.; Baldwin, K.F. Ridge regression: Some simulation. Commun. Stat. Theory Methods 1975, 4, 105–123. [Google Scholar] [CrossRef]
  3. Lawless, J.F.; Wang, P. A simulation study of ridge and other regression estimators. Commun. Stat. Simul. Comput. 1976, 5, 307–323. [Google Scholar]
  4. Gibbons, D.G. A simulation study of some ridge estimators. J. Am. Stat. Assoc. 1981, 76, 131–139. [Google Scholar] [CrossRef]
  5. Nomura, M. On the almost unbiased ridge regression estimation. Commun. Stat. Simul. Comput. 1988, 17, 729–743. [Google Scholar] [CrossRef]
  6. Kibria, B.M.G. Performance of some new ridge regression estimators. Commun. Stat. Simul. Comput. 2003, 32, 419–435. [Google Scholar] [CrossRef]
  7. Khalaf, G. A Proposed Ridge Parameter to Improve the Least Square Estimator. J. Mod. Appl. Stat. Methods 2012, 11, 443–449. [Google Scholar] [CrossRef] [Green Version]
  8. Khalaf, G.; Shukur, G. Choosing ridge parameters for regression problems. Commun. Stat. Theory Methods 2005, 34, 1177–1182. [Google Scholar] [CrossRef]
  9. Alkhamisi, M.; Shukur, G. Developing ridge parameters for SUR model. Commun. Stat. Theory Methods 2008, 37, 544–564. [Google Scholar] [CrossRef]
  10. Muniz, G.; Kibria, B.M.G. On some ridge regression estimators: An empirical comparison. Commun. Stat. Simul. Comput. 2009, 38, 621–630. [Google Scholar] [CrossRef]
  11. Feras, M.B.; Gore, S.D. Ridge regression estimator: Combining unbiased and ordinary ridge regression methods of estimation. Surv. Math. Appl. 2009, 4, 99–109. [Google Scholar]
  12. Gruber, M.H.J. Improving Efficiency by Shrinkage: The James–Stein and Ridge Regression Estimators; Marcel Dekker: New York, NY, USA, 1998. [Google Scholar]
  13. Muniz, G.; Kibria, B.M.G.; Mansson, K.; Shukur, G. On developing ridge regression parameters: A graphical investigation. Stat. Oper. Res. Trans. 2012, 36, 115–138. [Google Scholar]
  14. Mansson, K.; Shukur, G.; Kibria, B.M.G. On some ridge regression estimators: A Monte Carlo simulation study under different error variances. J. Stat. 2010, 17, 1–22. [Google Scholar]
  15. Hefnawy, E.A.; Farag, A. A combined nonlinear programming model and Kibria method for choosing ridge parameter regression. Commun. Stat. Simul. Comput. 2013. [Google Scholar] [CrossRef]
  16. Roozbeh, M.; Arashi, M. Feasible ridge estimator in partially linear models. J. Multivar. Anal. 2013, 116, 35–44. [Google Scholar] [CrossRef]
  17. Arashi, M.; Valizadeh, T. Performance of Kibria’s methods in partial linear ridge regression model. Stat. Pap. 2014, 56, 231–246. [Google Scholar] [CrossRef]
  18. Aslam, M. Performance of Kibria’s method for the heteroscedastic ridge regression model: Some Monte Carlo evidence. Commun. Stat. Simul. Comput. 2014, 43, 673–686. [Google Scholar] [CrossRef]
  19. Asar, Y.; Karaibrahimoğlu, G.A. Modified ridge regression parameters: A comparative Monte Carlo study. Hacet. J. Math. Stat. 2014, 43, 827–841. [Google Scholar]
  20. Saleh, A.K.M.E.; Arashi, M.; Tabatabaey, S.M.M. Statistical Inference for Models with Multivariate T-Distributed Errors; John Wiley: Hoboken, NJ, USA, 2014. [Google Scholar]
  21. Asar, Y.; Erişoğlu, M. Influence diagnostics in two-parameter ridge regression. J. Data Sci. 2016, 14, 33–52. [Google Scholar]
  22. Goktas, A.; Sevinc, V. Two new ridge parameters and a guide for selecting an appropriate ridge parameter in linear regression. Gazi Univ. J. Sci. 2016, 29, 201–211. [Google Scholar]
  23. Fallah, R.; Arashi, M.; Tabatabaey, S.M.M. On the ridge regression estimator with sub-space restriction. Commun. Stat. Theory Methods 2017, 46, 11854–11865. [Google Scholar] [CrossRef]
  24. Norouzirad, M.; Arashi, M. Preliminary test and stein-type shrinkage ridge estimators in robust regression. Stat. Pap. 2019, 60, 1849–1882. [Google Scholar] [CrossRef]
  25. Saleh, A.K.M.E.; Arashi, M.; Kibria, B.M.G. Theory of Ridge Regression Estimation with Applications; John Wiley: Hoboken, NJ, USA, 2019. [Google Scholar]
  26. Halawa, A.M.; Bassiouni, M.Y. Tests of regression coefficients under ridge regression models. J. Stat. Comput. Simul. 2000, 65, 341–356. [Google Scholar] [CrossRef]
  27. Cule, E.; Vineis, P.; De Iorio, M. Significance testing in ridge regression for genetic data. BMC Bioinform. 2011, 12, 372. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Gökpınar, E.; Ebegil, M. A study on tests of hypothesis based on ridge estimator. Gazi Univ. J. Sci. 2016, 29, 769–781. [Google Scholar]
  29. Kibria, B.M.G.; Banik, S. A simulation study on the size and power properties of some ridge regression tests. Appl. Appl. Math. Int. J. (AAM) 2019, 14, 741–761. [Google Scholar]
  30. Hocking, R.R.; Speed, F.M.; Lynn, M.J. A class of biased estimators in linear regression. Technometrics 1976, 18, 425–437. [Google Scholar] [CrossRef]
  31. Thisted, R.A. Ridge Regression, Minimax Estimation and Empirical Bayes Methods; Technical Report 28; Division of Biostatistics, Stanford University: Stanford, CA, USA, 1976. [Google Scholar]
  32. Venables, W.N.; Ripley, B.D. Modern Applied Statistics with S, 4th ed.; Springer: New York, NY, USA, 2002; ISBN 0-387-95457-0. [Google Scholar]
  33. Dorugade, A.; Kashid, D. Alternative Method for Choosing Ridge Parameter for Regression. Appl. Math. Sci. 2010, 4, 447–456. [Google Scholar]
  34. Schaefer, R.; Roi, L.; Wolfe, R. A ridge logistic estimator. Commun. Stat. Theory Methods 1984, 13, 99–113. [Google Scholar] [CrossRef]
  35. Dorugade, A. New Ridge Parameters for Ridge Regression. J. Assoc. Arab Univ. Basic Appl. Sci. 2014, 15, 94–99. [Google Scholar] [CrossRef] [Green Version]
  36. Ullah, M.I.; Aslam, M.; Altaf, S. lmridge: A comprehensive R package for ridge regression. R J. 2018, 10, 326–346. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Power of the test for correlation coefficient 0.80 and q = 4.
Figure 2. Power of the test for correlation coefficient 0.80 and q = 6.
Figure 3. Power of the test for correlation coefficient 0.80 and q = 10.
Figure 4. Power of the test for correlation coefficient 0.95 and q = 4.
Figure 5. Power of the test for correlation coefficient 0.95 and q = 6.
Figure 6. Power of the test for correlation coefficient 0.95 and q = 10.
Table 1. Some ridge regression estimators.

Authors | Ridge Estimator Formula (k)
Hocking, Speed and Lynn [30] | $k_{HSL} = \hat{\sigma}^2 \sum_{j=1}^{q} \hat{\alpha}_j^2 \lambda_j^2 \big/ \big( \sum_{j=1}^{q} \hat{\alpha}_j^2 \lambda_j \big)^2$
Hoerl and Kennard [1] | $k_{HK70} = \hat{\sigma}^2 / \max(\hat{\alpha}_j^2)$
Thisted [31] | $k_{TH} = (q-2)\hat{\sigma}^2 \big/ \sum_{j=1}^{q} \hat{\beta}_j^2$
Venables and Ripley [32] | $k_{VR} = n(q-2)\hat{\sigma}^2 / (\hat{\beta}^T X^T X \hat{\beta})$
Lawless and Wang [3] | $k_{LW} = q\hat{\sigma}^2 \big/ \sum_{j=1}^{q} \lambda_j \hat{\alpha}_j^2$
Hoerl, Kennard and Baldwin [2] | $k_{HKB} = q\hat{\sigma}^2 \big/ \sum_{j=1}^{q} \hat{\alpha}_j^2$
Kibria [6] | $k_{KibAM} = \text{arithmetic mean}\,(\hat{\sigma}^2/\hat{\alpha}_j^2)$
Kibria [6] | $k_{KibGM} = \text{geometric mean}\,(\hat{\sigma}^2/\hat{\alpha}_j^2)$
Kibria [6] | $k_{KibMED} = \text{median}\,(\hat{\sigma}^2/\hat{\alpha}_j^2)$
Muniz and Kibria [10] | $k_{M2} = \max\big( 1 \big/ (\hat{\sigma}^2/\hat{\alpha}_j^2)^{1/2} \big)$
Muniz and Kibria [10] | $k_{M3} = \max\big( (\hat{\sigma}^2/\hat{\alpha}_j^2)^{1/2} \big)$
Muniz and Kibria [10] | $k_{M4} = \text{geometric mean}\big( 1 \big/ (\hat{\sigma}^2/\hat{\alpha}_j^2)^{1/2} \big)$
Muniz and Kibria [10] | $k_{M5} = \text{geometric mean}\big( (\hat{\sigma}^2/\hat{\alpha}_j^2)^{1/2} \big)$
Muniz and Kibria [10] | $k_{M6} = \text{median}\big( 1 \big/ (\hat{\sigma}^2/\hat{\alpha}_j^2)^{1/2} \big)$
Muniz et al. [13] | $k_{M8} = \max(1/g_j)$, where $g_j = \lambda_{\max}\hat{\sigma}^2 \big/ \big( (n-q)\hat{\sigma}^2 + \lambda_{\max}\hat{\alpha}_j^2 \big)$
Muniz et al. [13] | $k_{M9} = \max(g_j)$
Muniz et al. [13] | $k_{M10} = \text{geometric mean}(1/g_j)$
Muniz et al. [13] | $k_{M11} = \text{geometric mean}(g_j)$
Muniz et al. [13] | $k_{M12} = \text{median}(1/g_j)$
Dorugade and Kashid [33] | $k_{D} = \max\big( 0,\; k_{HKB} - 1/(n\,\mathrm{VIF}_{\max}) \big)$
Khalaf and Shukur [8] | $k_{KS} = \lambda_{\max}\hat{\sigma}^2 \big/ \big( (n-q)\hat{\sigma}^2 + \lambda_{\max}\hat{\alpha}_{\max}^2 \big)$
Khalaf and Shukur [8] | $k_{K12} = k_{HK70} + 2/(\lambda_{\max} + \lambda_{\min})$
Alkhamisi and Shukur [9] | $k_{KSAM} = \text{arithmetic mean}(m_j)$, where $m_j = \lambda_j\hat{\sigma}^2 \big/ \big( (n-q)\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2 \big)$
Alkhamisi and Shukur [9] | $k_{KSMAX} = \max(m_j)$
Alkhamisi and Shukur [9] | $k_{KSMED} = \text{median}(m_j)$
Alkhamisi and Shukur [9] | $k_{ASH} = \max\big( \hat{\sigma}^2/\hat{\alpha}_j^2 + 1/\lambda_j \big)$
Schaefer et al. [34] | $k_{SC} = 1/\max(\hat{\alpha}_j^2)$
Asar and Erişoğlu [21] | $k_{A1} = q^2\hat{\sigma}^2 \big/ \big( \lambda_{\max}^2 \sum_{j=1}^{q}\hat{\alpha}_j^2 \big)$
Asar and Erişoğlu [21] | $k_{A2} = q^3\hat{\sigma}^2 \big/ \big( \lambda_{\max}^3 \sum_{j=1}^{q}\hat{\alpha}_j^2 \big)$
Asar and Erişoğlu [21] | $k_{A3} = q\hat{\sigma}^2 \big/ \big( \lambda_{\max}^{1/3} \sum_{j=1}^{q}\hat{\alpha}_j^2 \big)$
Asar and Erişoğlu [21] | $k_{A4} = q^2\hat{\sigma}^2 \big/ \big\{ \big( \sum_{j=1}^{q}\lambda_j \big)^{1/3} \sum_{j=1}^{q}\hat{\alpha}_j^2 \big\}$
Asar and Erişoğlu [21] | $k_{A5} = 2q\hat{\sigma}^2 \big/ \big( \lambda_{\max}^{1/2} \sum_{j=1}^{q}\hat{\alpha}_j^2 \big)$
Nomura [5] | $k_{NOM} = q\hat{\sigma}^2 \big/ \sum_{j=1}^{q} \big[ \hat{\alpha}_j^2 \big/ \big\{ 1 + \big( 1 + \lambda_j \hat{\alpha}_j^2/\hat{\sigma}^2 \big)^{1/2} \big\} \big]$
Goktas and Sevinc [22] | $k_{GS1} = \text{median}\big( (\hat{\sigma}^2/\hat{\alpha}_j^2)^{1/2} \big)$
Goktas and Sevinc [22] | $k_{GS2} = \hat{\sigma}^2 \big/ \big\{ \text{median}\big( (\hat{\sigma}^2/\hat{\alpha}_j^2)^{1/2} \big) \big\}^2$
Dorugade [35] | $k_{D1} = \text{arithmetic mean}\big( 2\hat{\sigma}^2/(\lambda_{\max}\hat{\alpha}_j^2) \big)$
Dorugade [35] | $k_{D2} = \text{median}\big( 2\hat{\sigma}^2/(\lambda_{\max}\hat{\alpha}_j^2) \big)$
Dorugade [35] | $k_{D3} = \text{harmonic mean}\big( 2\hat{\sigma}^2/(\lambda_{\max}\hat{\alpha}_j^2) \big)$
Dorugade [35] | $k_{D4} = \text{geometric mean}\big( 2\hat{\sigma}^2/(\lambda_{\max}\hat{\alpha}_j^2) \big)$
Feras and Gore [11] | $k_{FG} = q\hat{\sigma}^2 \big/ \sum_{j=1}^{q} \big[ \hat{\alpha}_j^2 \big/ \big\{ \big( \hat{\alpha}_j^4\lambda_j^2/(4\hat{\sigma}^4) + 6\hat{\alpha}_j^2\lambda_j/\hat{\sigma}^2 \big)^{1/2} - \hat{\alpha}_j^2\lambda_j/(2\hat{\sigma}^2) \big\} \big]$
Table 2. Simulated Type I errors for ρ = 0.80 and α = 0.05.

Statistics | n = 30, q = 4 | n = 30, q = 6 | n = 30, q = 8 | n = 50, q = 8 | n = 50, q = 10 | n = 80, q = 10 | n = 100, q = 25 | Average Type I Error
tOLS | 0.0516 | 0.0553 | 0.0509 | 0.0525 | 0.0486 | 0.0490 | 0.0506 | 0.051
tKHSL | 0.0537 | 0.0544 | 0.0441 | 0.0529 | 0.0445 | 0.0473 | 0.0416 | 0.048
tKHK70 | 0.0511 | 0.0546 | 0.0479 | 0.0532 | 0.0470 | 0.0478 | 0.0489 | 0.050
tKTH | 0.0513 | 0.0537 | 0.0442 | 0.0522 | 0.0454 | 0.0472 | 0.0417 | 0.048
tKVR | 0.1139 | 0.1745 | 0.2069 | 0.3280 | 0.3967 | 0.4864 | 0.8803 | 0.370
tKLW76 | 0.0473 | 0.0493 | 0.0349 | 0.0485 | 0.0418 | 0.0489 | 0.0346 | 0.044
tKHKB | 0.0548 | 0.0526 | 0.0434 | 0.0525 | 0.0447 | 0.0473 | 0.0412 | 0.048
tKKibAM | 0.0828 | 0.1152 | 0.1327 | 0.1856 | 0.2249 | 0.2612 | 0.5162 | 0.217
tKKibMED | 0.0597 | 0.0588 | 0.0444 | 0.0589 | 0.0520 | 0.0543 | 0.0369 | 0.052
tKKibGM | 0.0639 | 0.0669 | 0.0487 | 0.0686 | 0.0598 | 0.0656 | 0.0394 | 0.059
tKM2 | 0.0955 | 0.1333 | 0.1387 | 0.1879 | 0.2063 | 0.2319 | 0.2572 | 0.179
tKM3 | 0.0917 | 0.1277 | 0.1488 | 0.2037 | 0.2470 | 0.2877 | 0.5848 | 0.242
tKM4 | 0.0770 | 0.0795 | 0.0603 | 0.0849 | 0.0744 | 0.0859 | 0.0478 | 0.073
tKM5 | 0.0650 | 0.0685 | 0.0497 | 0.0691 | 0.0612 | 0.0659 | 0.0396 | 0.060
tKM6 | 0.0798 | 0.0861 | 0.0692 | 0.0961 | 0.0842 | 0.0970 | 0.0572 | 0.081
tKM8 | 0.1229 | 0.1736 | 0.1836 | 0.2921 | 0.3180 | 0.4119 | 0.4440 | 0.278
tKM9 | 0.0428 | 0.0581 | 0.0708 | 0.0510 | 0.0661 | 0.0436 | 0.1802 | 0.073
tKM10 | 0.1197 | 0.1451 | 0.1062 | 0.2468 | 0.2174 | 0.3620 | 0.1286 | 0.189
tKM11 | 0.0420 | 0.0346 | 0.0282 | 0.0419 | 0.0304 | 0.0450 | 0.0307 | 0.036
tKM12 | 0.1204 | 0.1514 | 0.1209 | 0.2540 | 0.2322 | 0.3665 | 0.1616 | 0.201
tKD | 0.0533 | 0.0570 | 0.0511 | 0.0536 | 0.0477 | 0.0483 | 0.0440 | 0.051
tKKS | 0.0507 | 0.0546 | 0.0479 | 0.0532 | 0.0473 | 0.0478 | 0.0493 | 0.050
tKKSAM | 0.0499 | 0.0529 | 0.0458 | 0.0529 | 0.0475 | 0.0481 | 0.0503 | 0.050
tKKSMAX | 0.0482 | 0.0495 | 0.0399 | 0.0483 | 0.0408 | 0.0459 | 0.0349 | 0.044
tKKSMED | 0.0518 | 0.0550 | 0.0497 | 0.0531 | 0.0485 | 0.0488 | 0.0504 | 0.051
tKSC | 0.0520 | 0.0545 | 0.0473 | 0.0530 | 0.0469 | 0.0477 | 0.0486 | 0.050
tKA1 | 0.0508 | 0.0549 | 0.0487 | 0.0532 | 0.0480 | 0.0479 | 0.0504 | 0.051
tKA2 | 0.0511 | 0.0548 | 0.0487 | 0.0531 | 0.0476 | 0.0480 | 0.0503 | 0.051
tKA3 | 0.0527 | 0.0536 | 0.0455 | 0.0519 | 0.0466 | 0.0468 | 0.0468 | 0.049
tKA4 | 0.0528 | 0.0534 | 0.0453 | 0.0521 | 0.0463 | 0.0469 | 0.0462 | 0.049
tKA5 | 0.0522 | 0.0543 | 0.0482 | 0.0530 | 0.0474 | 0.0481 | 0.0504 | 0.051
tKASH | 0.1223 | 0.1848 | 0.2238 | 0.3237 | 0.3951 | 0.4564 | 0.8705 | 0.368
tKNOM | 0.0798 | 0.0693 | 0.0454 | 0.0652 | 0.0503 | 0.0568 | 0.0364 | 0.058
tKSG1 | 0.0643 | 0.0611 | 0.0410 | 0.0635 | 0.0525 | 0.0596 | 0.0379 | 0.054
tKSG2 | 0.0862 | 0.1125 | 0.1225 | 0.1767 | 0.2037 | 0.2432 | 0.4428 | 0.198
tKK12 | 0.0576 | 0.0499 | 0.0350 | 0.0482 | 0.0401 | 0.0439 | 0.0425 | 0.045
tKD1 | 0.0740 | 0.0920 | 0.0933 | 0.1250 | 0.1444 | 0.1583 | 0.2042 | 0.127
tKD2 | 0.0542 | 0.0525 | 0.0435 | 0.0512 | 0.0456 | 0.0476 | 0.0481 | 0.049
tKD3 | 0.0522 | 0.0543 | 0.0482 | 0.0530 | 0.0474 | 0.0481 | 0.0504 | 0.051
tKD4 | 0.0564 | 0.0542 | 0.0432 | 0.0527 | 0.0454 | 0.0482 | 0.0472 | 0.050
tKFG | 0.0644 | 0.0566 | 0.0391 | 0.0550 | 0.0446 | 0.0500 | 0.0357 | 0.049
Table 3. Simulated Type I errors for ρ = 0.95 and α = 0.05.

Statistics | n = 30, q = 4 | n = 30, q = 6 | n = 30, q = 8 | n = 50, q = 8 | n = 50, q = 10 | n = 80, q = 10 | n = 100, q = 25 | Average Type I Error
tOLS | 0.0480 | 0.0489 | 0.0508 | 0.0522 | 0.0524 | 0.0510 | 0.0507 | 0.051
tKHSL | 0.0582 | 0.0467 | 0.0374 | 0.0499 | 0.0426 | 0.0507 | 0.0327 | 0.045
tKHK70 | 0.0480 | 0.0464 | 0.0487 | 0.0505 | 0.0508 | 0.0499 | 0.0495 | 0.049
tKTH | 0.0479 | 0.0442 | 0.0439 | 0.0482 | 0.0465 | 0.0477 | 0.0424 | 0.046
tKVR | 0.1913 | 0.3010 | 0.3954 | 0.5188 | 0.6153 | 0.6867 | 0.9799 | 0.527
tKLW76 | 0.0624 | 0.0537 | 0.0429 | 0.0620 | 0.0589 | 0.0704 | 0.0375 | 0.055
tKHKB | 0.0497 | 0.0433 | 0.0412 | 0.0467 | 0.0453 | 0.0470 | 0.0416 | 0.045
tKKibAM | 0.0898 | 0.1321 | 0.1615 | 0.2074 | 0.2474 | 0.2783 | 0.4582 | 0.225
tKKibMED | 0.0614 | 0.0510 | 0.0403 | 0.0499 | 0.0448 | 0.0510 | 0.0366 | 0.048
tKKibGM | 0.0630 | 0.0505 | 0.0393 | 0.0518 | 0.0467 | 0.0520 | 0.0331 | 0.048
tKM2 | 0.1903 | 0.2978 | 0.3915 | 0.4868 | 0.5710 | 0.6307 | 0.9065 | 0.496
tKM3 | 0.1359 | 0.1961 | 0.2473 | 0.3083 | 0.3705 | 0.4085 | 0.6832 | 0.336
tKM4 | 0.1694 | 0.2243 | 0.2548 | 0.3093 | 0.3243 | 0.3603 | 0.2893 | 0.276
tKM5 | 0.0634 | 0.0515 | 0.0395 | 0.0523 | 0.0472 | 0.0523 | 0.0331 | 0.048
tKM6 | 0.1730 | 0.2364 | 0.2793 | 0.3378 | 0.3683 | 0.4111 | 0.3880 | 0.313
tKM8 | 0.1914 | 0.3016 | 0.3962 | 0.5100 | 0.5917 | 0.6685 | 0.9287 | 0.513
tKM9 | 0.0659 | 0.1081 | 0.1673 | 0.1393 | 0.1956 | 0.1311 | 0.4297 | 0.177
tKM10 | 0.1906 | 0.2728 | 0.2940 | 0.4405 | 0.4541 | 0.6072 | 0.4127 | 0.382
tKM11 | 0.0296 | 0.0417 | 0.0388 | 0.0393 | 0.0469 | 0.0350 | 0.0314 | 0.038
tKM12 | 0.1921 | 0.2657 | 0.2635 | 0.4501 | 0.4538 | 0.6200 | 0.4060 | 0.379
tKD | 0.0481 | 0.0489 | 0.0508 | 0.0522 | 0.0524 | 0.0510 | 0.0507 | 0.051
tKKS | 0.0480 | 0.0464 | 0.0488 | 0.0508 | 0.0509 | 0.0502 | 0.0495 | 0.049
tKKSAM | 0.0443 | 0.0397 | 0.0361 | 0.0477 | 0.0451 | 0.0494 | 0.0472 | 0.044
tKKSMAX | 0.0452 | 0.0543 | 0.0854 | 0.0432 | 0.0480 | 0.0435 | 0.0376 | 0.051
tKKSMED | 0.0479 | 0.0482 | 0.0502 | 0.0526 | 0.0526 | 0.0507 | 0.0506 | 0.050
tKSC | 0.0483 | 0.0456 | 0.0481 | 0.0502 | 0.0505 | 0.0499 | 0.0494 | 0.049
tKA1 | 0.0483 | 0.0473 | 0.0494 | 0.0520 | 0.0526 | 0.0506 | 0.0504 | 0.050
tKA2 | 0.0484 | 0.0473 | 0.0494 | 0.0519 | 0.0525 | 0.0506 | 0.0504 | 0.050
tKA3 | 0.0483 | 0.0449 | 0.0459 | 0.0492 | 0.0487 | 0.0495 | 0.0483 | 0.048
tKA4 | 0.0490 | 0.0442 | 0.0451 | 0.0489 | 0.0474 | 0.0490 | 0.0469 | 0.047
tKA5 | 0.0479 | 0.0465 | 0.0484 | 0.0508 | 0.0513 | 0.0502 | 0.0503 | 0.049
tKASH | 0.1859 | 0.2986 | 0.3967 | 0.5238 | 0.6191 | 0.6850 | 0.9770 | 0.527
tKNOM | 0.0631 | 0.0445 | 0.0345 | 0.0445 | 0.0405 | 0.0454 | 0.0352 | 0.044
tKSG1 | 0.1040 | 0.0908 | 0.0676 | 0.0902 | 0.0786 | 0.0881 | 0.0418 | 0.080
tKSG2 | 0.1100 | 0.1484 | 0.1721 | 0.2247 | 0.2554 | 0.2853 | 0.3872 | 0.226
tKK12 | 0.0897 | 0.0508 | 0.0309 | 0.0457 | 0.0393 | 0.0458 | 0.0351 | 0.048
tKD1 | 0.0712 | 0.0892 | 0.0988 | 0.1241 | 0.1380 | 0.1530 | 0.1683 | 0.120
tKD2 | 0.0525 | 0.0425 | 0.0426 | 0.0484 | 0.0480 | 0.0479 | 0.0490 | 0.047
tKD3 | 0.0479 | 0.0465 | 0.0484 | 0.0508 | 0.0513 | 0.0502 | 0.0503 | 0.049
tKD4 | 0.0526 | 0.0435 | 0.0411 | 0.0470 | 0.0463 | 0.0470 | 0.0484 | 0.047
tKFG | 0.0483 | 0.0381 | 0.0318 | 0.0429 | 0.0390 | 0.0440 | 0.0346 | 0.040
Table 4. Simulated Type I errors for ρ = 0.95 and α = 0.01.

Statistics | n = 30, q = 4 | n = 30, q = 6 | n = 30, q = 8 | n = 50, q = 8 | n = 50, q = 10 | n = 80, q = 10 | n = 100, q = 25 | Average Type I Error Probability
tOLS | 0.0130 | 0.0138 | 0.0066 | 0.0098 | 0.0090 | 0.0106 | 0.0088 | 0.010
tKHSL | 0.0112 | 0.0092 | 0.0030 | 0.0062 | 0.0054 | 0.0080 | 0.0042 | 0.007
tKHK70 | 0.0126 | 0.0128 | 0.0062 | 0.0096 | 0.0090 | 0.0104 | 0.0084 | 0.010
tKTH | 0.0122 | 0.0126 | 0.0056 | 0.0092 | 0.0080 | 0.0094 | 0.0070 | 0.009
tKVR | 0.0514 | 0.0956 | 0.1228 | 0.2274 | 0.2946 | 0.3896 | 0.9000 | 0.297
tKLW | 0.0092 | 0.0070 | 0.0010 | 0.0102 | 0.0068 | 0.0118 | 0.0050 | 0.007
tKHKB | 0.0106 | 0.0118 | 0.0050 | 0.0092 | 0.0076 | 0.0092 | 0.0070 | 0.009
tKKibAM | 0.0198 | 0.0324 | 0.0360 | 0.0748 | 0.1018 | 0.1290 | 0.3414 | 0.105
tKKibMED | 0.0130 | 0.0110 | 0.0042 | 0.0086 | 0.0058 | 0.0082 | 0.0044 | 0.008
tKKibGM | 0.0122 | 0.0098 | 0.0046 | 0.0066 | 0.0056 | 0.0094 | 0.0044 | 0.008
tKM2 | 0.0482 | 0.0928 | 0.1222 | 0.2134 | 0.2784 | 0.3388 | 0.7274 | 0.260
tKM3 | 0.0330 | 0.0564 | 0.0612 | 0.1182 | 0.1544 | 0.1898 | 0.5078 | 0.160
tKM4 | 0.0474 | 0.0740 | 0.0750 | 0.1222 | 0.1272 | 0.1560 | 0.1094 | 0.102
tKM5 | 0.0120 | 0.0100 | 0.0046 | 0.0066 | 0.0056 | 0.0096 | 0.0044 | 0.008
tKM6 | 0.0478 | 0.0760 | 0.0836 | 0.1376 | 0.1488 | 0.1834 | 0.1706 | 0.121
tKM8 | 0.0454 | 0.0904 | 0.1222 | 0.2270 | 0.2916 | 0.3768 | 0.7718 | 0.275
tKM9 | 0.0144 | 0.0278 | 0.0406 | 0.0458 | 0.0688 | 0.0496 | 0.2704 | 0.074
tKM10 | 0.0457 | 0.0820 | 0.0850 | 0.1804 | 0.1977 | 0.3089 | 0.1871 | 0.155
tKM11 | 0.0062 | 0.0059 | 0.0015 | 0.0043 | 0.0042 | 0.0065 | 0.0028 | 0.004
tKM12 | 0.0494 | 0.0830 | 0.0744 | 0.1964 | 0.2058 | 0.3322 | 0.1914 | 0.162
tKD | 0.0130 | 0.0138 | 0.0066 | 0.0098 | 0.0090 | 0.0106 | 0.0088 | 0.010
tKKS | 0.0126 | 0.0128 | 0.0062 | 0.0096 | 0.0090 | 0.0104 | 0.0086 | 0.010
tKKSAM | 0.0092 | 0.0092 | 0.0058 | 0.0086 | 0.0068 | 0.0096 | 0.0072 | 0.008
tKKSMAX | 0.0082 | 0.0138 | 0.0240 | 0.0056 | 0.0062 | 0.0074 | 0.0056 | 0.010
tKKSMED | 0.0130 | 0.0132 | 0.0064 | 0.0098 | 0.0090 | 0.0108 | 0.0086 | 0.010
tKSC | 0.0122 | 0.0126 | 0.0062 | 0.0096 | 0.0088 | 0.0102 | 0.0082 | 0.010
tKA1 | 0.0128 | 0.0132 | 0.0064 | 0.0100 | 0.0090 | 0.0106 | 0.0088 | 0.010
tKA2 | 0.0128 | 0.0132 | 0.0064 | 0.0098 | 0.0090 | 0.0106 | 0.0088 | 0.010
tKA3 | 0.0114 | 0.0126 | 0.0062 | 0.0094 | 0.0084 | 0.0098 | 0.0080 | 0.009
tKA4 | 0.0110 | 0.0126 | 0.0058 | 0.0094 | 0.0080 | 0.0096 | 0.0078 | 0.009
tKA5 | 0.0122 | 0.0128 | 0.0062 | 0.0096 | 0.0090 | 0.0106 | 0.0088 | 0.010
tKASH | 0.0398 | 0.0748 | 0.0996 | 0.2200 | 0.2900 | 0.3884 | 0.8944 | 0.287
tKNOM | 0.0136 | 0.0080 | 0.0038 | 0.0062 | 0.0048 | 0.0078 | 0.0044 | 0.007
tKSG1 | 0.0250 | 0.0176 | 0.0064 | 0.0196 | 0.0124 | 0.0224 | 0.0068 | 0.016
tKSG2 | 0.0254 | 0.0380 | 0.0368 | 0.0736 | 0.1010 | 0.1254 | 0.2874 | 0.098
tKK12 | 0.0206 | 0.0086 | 0.0010 | 0.0070 | 0.0036 | 0.0072 | 0.0046 | 0.008
tKD1 | 0.0152 | 0.0186 | 0.0188 | 0.0408 | 0.0518 | 0.0634 | 0.1054 | 0.045
tKD2 | 0.0114 | 0.0112 | 0.0052 | 0.0080 | 0.0080 | 0.0090 | 0.0084 | 0.009
tKD3 | 0.0122 | 0.0128 | 0.0062 | 0.0096 | 0.0090 | 0.0106 | 0.0088 | 0.010
tKD4 | 0.0106 | 0.0110 | 0.0046 | 0.0082 | 0.0074 | 0.0086 | 0.0080 | 0.008
tKFG | 0.0098 | 0.0080 | 0.0026 | 0.0054 | 0.0044 | 0.0076 | 0.0044 | 0.006
Table 5. Powers of tests for ρ = 0.80 and α = 0.05.

Statistics | q = 4, n = 30 | q = 4, n = 50 | q = 4, n = 100 | q = 6, n = 30 | q = 6, n = 50 | q = 6, n = 100 | q = 10, n = 30 | q = 10, n = 50 | q = 10, n = 100
tOLS | 0.8074 | 0.8254 | 0.8496 | 0.6362 | 0.6532 | 0.6774 | 0.4318 | 0.4496 | 0.449
tKHSL | 0.8728 | 0.8912 | 0.907 | 0.7198 | 0.728 | 0.7634 | 0.4678 | 0.5092 | 0.524
tKHK70 | 0.8746 | 0.8912 | 0.9062 | 0.7034 | 0.7122 | 0.7478 | 0.453 | 0.4804 | 0.4856
tKTH | 0.8790 | 0.8976 | 0.9154 | 0.7346 | 0.7478 | 0.7878 | 0.4766 | 0.5214 | 0.5406
tKLW76 | 0.8840 | 0.901 | 0.9136 | 0.7398 | 0.7506 | 0.7878 | 0.485 | 0.5396 | 0.5624
tKHKB | 0.9332 | 0.9468 | 0.956 | 0.7806 | 0.8034 | 0.8402 | 0.4888 | 0.5430 | 0.566
tKKibMED | 0.9636 | 0.975 | 0.9804 | 0.8558 | 0.8798 | 0.906 | 0.5666 | 0.6454 | 0.6726
tKKibGM | 0.9706 | 0.9796 | 0.9858 | 0.8952 | 0.916 | 0.9328 | 0.6472 | 0.7282 | 0.7546
tKM5 | 0.9718 | 0.98 | 0.986 | 0.9004 | 0.9184 | 0.9336 | 0.6568 | 0.7322 | 0.7566
tKD | 0.8390 | 0.9012 | 0.9424 | 0.6732 | 0.7332 | 0.808 | 0.4444 | 0.4966 | 0.5398
tKKS | 0.8534 | 0.861 | 0.8684 | 0.6856 | 0.6882 | 0.7056 | 0.4502 | 0.4728 | 0.4724
tKKSAM | 0.8532 | 0.8518 | 0.858 | 0.6788 | 0.6812 | 0.69 | 0.3578 | 0.4718 | 0.4596
tKKSMAX | 0.9362 | 0.9026 | 0.8822 | 0.8664 | 0.766 | 0.7342 | 0.7006 | 0.6278 | 0.5148
tKKSMED | 0.8158 | 0.8298 | 0.85 | 0.6452 | 0.6574 | 0.681 | 0.4376 | 0.4528 | 0.451
tKSC | 0.8720 | 0.8888 | 0.905 | 0.7010 | 0.7114 | 0.7474 | 0.4542 | 0.4810 | 0.4856
tKA1 | 0.8612 | 0.8774 | 0.8952 | 0.6782 | 0.689 | 0.7216 | 0.445 | 0.4632 | 0.4692
tKA2 | 0.8698 | 0.8868 | 0.904 | 0.6864 | 0.6972 | 0.7308 | 0.4472 | 0.4666 | 0.4734
tKA3 | 0.9042 | 0.9176 | 0.933 | 0.7272 | 0.737 | 0.7748 | 0.463 | 0.4950 | 0.5094
tKA4 | 0.9064 | 0.9198 | 0.9346 | 0.7304 | 0.7406 | 0.7788 | 0.465 | 0.4978 | 0.5118
tKA5 | 0.8936 | 0.9094 | 0.9266 | 0.7016 | 0.7106 | 0.7458 | 0.4508 | 0.4728 | 0.4788
tKK12 | 0.9934 | 0.9974 | 0.998 | 0.8958 | 0.9158 | 0.9342 | 0.5158 | 0.5902 | 0.6074
tKD2 | 0.9384 | 0.957 | 0.9642 | 0.7634 | 0.7794 | 0.8126 | 0.4694 | 0.4974 | 0.5086
tKD3 | 0.8936 | 0.9094 | 0.9266 | 0.7016 | 0.7106 | 0.7458 | 0.4508 | 0.4728 | 0.4788
tKD4 | 0.9494 | 0.9636 | 0.9716 | 0.7966 | 0.8152 | 0.8422 | 0.4826 | 0.5248 | 0.5394
Table 6. Powers of tests for ρ = 0.95 and α = 0.05.

Statistics | q = 4, n = 30 | q = 4, n = 50 | q = 4, n = 100 | q = 6, n = 30 | q = 6, n = 50 | q = 6, n = 100 | q = 10, n = 30 | q = 10, n = 50 | q = 10, n = 100
tOLS | 0.8076 | 0.8254 | 0.8416 | 0.6284 | 0.6528 | 0.6738 | 0.4156 | 0.4418 | 0.4592
tKHSL | 0.891 | 0.9046 | 0.9164 | 0.7478 | 0.7702 | 0.792 | 0.4768 | 0.5505 | 0.58
tKHK70 | 0.8812 | 0.8986 | 0.911 | 0.6916 | 0.7146 | 0.7378 | 0.428 | 0.465 | 0.4844
tKTH | 0.8846 | 0.906 | 0.9168 | 0.7264 | 0.7616 | 0.7864 | 0.4468 | 0.5018 | 0.5314
tKLW76 | 0.8926 | 0.9064 | 0.9198 | 0.7528 | 0.7776 | 0.7992 | 0.4834 | 0.546 | 0.5964
tKHKB | 0.9412 | 0.9582 | 0.9664 | 0.785 | 0.8168 | 0.8442 | 0.4546 | 0.5212 | 0.5526
tKKibMED | 0.9664 | 0.9796 | 0.9846 | 0.8554 | 0.892 | 0.905 | 0.5358 | 0.6202 | 0.6554
tKKibGM | 0.9754 | 0.9868 | 0.9894 | 0.901 | 0.9318 | 0.9436 | 0.6094 | 0.7022 | 0.7344
tKM5 | 0.9776 | 0.9868 | 0.9898 | 0.907 | 0.934 | 0.9442 | 0.6194 | 0.708 | 0.7374
tKD | 0.8076 | 0.8254 | 0.8416 | 0.6284 | 0.6528 | 0.6738 | 0.4156 | 0.4418 | 0.4592
tKKS | 0.8712 | 0.8844 | 0.8892 | 0.6848 | 0.7044 | 0.7208 | 0.428 | 0.4628 | 0.4826
tKKSAM | 0.7096 | 0.9358 | 0.8868 | 0.4198 | 0.7978 | 0.7218 | 0.1308 | 0.5156 | 0.4874
tKKSMAX | 0.953 | 0.9918 | 0.9598 | 0.7946 | 0.9734 | 0.884 | 0.4756 | 0.8742 | 0.7224
tKKSMED | 0.816 | 0.8286 | 0.844 | 0.6348 | 0.657 | 0.675 | 0.4178 | 0.4442 | 0.4598
tKSC | 0.8768 | 0.8978 | 0.9096 | 0.689 | 0.7136 | 0.7384 | 0.4294 | 0.4636 | 0.4856
tKA1 | 0.849 | 0.8702 | 0.8842 | 0.6554 | 0.6782 | 0.7034 | 0.422 | 0.4508 | 0.4676
tKA2 | 0.8508 | 0.8722 | 0.8854 | 0.6568 | 0.6794 | 0.7044 | 0.4222 | 0.4512 | 0.4678
tKA3 | 0.9088 | 0.9268 | 0.9356 | 0.7116 | 0.7418 | 0.7666 | 0.4352 | 0.4782 | 0.4984
tKA4 | 0.9216 | 0.9376 | 0.9438 | 0.7284 | 0.7626 | 0.7862 | 0.4378 | 0.4856 | 0.5094
tKA5 | 0.8902 | 0.909 | 0.9192 | 0.6798 | 0.704 | 0.731 | 0.426 | 0.4574 | 0.4768
tKK12 | 1 | 1 | 1 | 0.9988 | 0.9992 | 0.9996 | 0.6938 | 0.7774 | 0.8222
tKD2 | 0.937 | 0.9548 | 0.9632 | 0.7384 | 0.768 | 0.7832 | 0.4358 | 0.4802 | 0.4986
tKD3 | 0.8902 | 0.909 | 0.9192 | 0.6798 | 0.704 | 0.731 | 0.426 | 0.4574 | 0.4768
tKD4 | 0.9492 | 0.9622 | 0.9696 | 0.7722 | 0.7998 | 0.8204 | 0.4484 | 0.5002 | 0.5206
Table 7. Average gain in power over the ordinary least squares (OLS) test for α = 0.05, averaged over n and q.

Statistics | ρ = 0.80 | ρ = 0.95
tKHSL | 7% | 10%
tKHK70 | 5% | 5%
tKTH | 8% | 8%
tKLW76 | 9% | 10%
tKHKB | 12% | 12%
tKKibMED | 19% | 18%
tKKibGM | 23% | 23%
tKM5 | 23% | 23%
tKD | 7% | 0%
tKKS | 3% | 4%
tKKSAM | 1% | −2%
tKKSMAX | 13% | 21%
tKKSMED | 0% | 0%
tKSC | 5% | 5%
tKA1 | 4% | 3%
tKA2 | 4% | 3%
tKA3 | 8% | 7%
tKA4 | 8% | 9%
tKA5 | 6% | 5%
tKK12 | 19% | 28%
tKD2 | 10% | 9%
tKD3 | 6% | 5%
tKD4 | 12% | 11%
Table 8. Variance inflation factors (VIFs) of the regressors.

Variable | VIF
Price | 19.7919
Engine Size | 12.8689
Horsepower | 14.8922
Weight | 10.2260
Table 9. Regression analysis.

Variable | OLS (k = 0): coef | t-stat | p-value | KKibMED (k = 0.34766): coef | t-stat | p-value | KKibGM (k = 0.38798): coef | t-stat | p-value
Price | 3 × 10⁻⁵ | 0.7697 | 0.4497 | 4 × 10⁻⁵ | 7.8052 | <0.0001 | 4 × 10⁻⁵ | 8.2386 | <0.0001
Engine Size | 0.0012 | 1.7103 | 0.1013 | 0.0007 | 5.9079 | <0.0001 | 0.0007 | 6.2434 | <0.0001
Horsepower | −0.0037 | −0.2546 | 0.8014 | 0.0108 | 4.5418 | 0.0001 | 0.0111 | 4.9219 | <0.0001
Weight | 0.0037 | 2.9330 | 0.0077 | 0.0022 | 7.6687 | <0.0001 | 0.0021 | 7.9105 | <0.0001
Intercept | 1.8380 | −0.9442 | 0.3553 | 3.1625 | −8.6380 | <0.0001 | 3.2371 | −9.1118 | <0.0001
$MSE(\hat{\beta})$, computed using Equation (5): 23.4343 (OLS), 15.4460 (KKibMED), 16.0363 (KKibGM).
