
Model selection via standard error adjusted adaptive lasso

Annals of the Institute of Statistical Mathematics

Abstract

The adaptive lasso is a model selection method shown to be both consistent in variable selection and asymptotically normal in coefficient estimation. Its actual variable selection performance depends on the weights used. It turns out that assigning weights from the OLS estimate (OLS-adaptive lasso) can perform very poorly when collinearity of the model matrix is a concern. To achieve better variable selection, we incorporate the standard errors of the OLS estimate into the weight calculation and propose two versions of the adaptive lasso, denoted SEA-lasso and NSEA-lasso. We show through numerical studies that when the predictors are highly correlated, SEA-lasso and NSEA-lasso can outperform the OLS-adaptive lasso under a variety of linear regression settings while retaining the theoretical properties of the adaptive lasso.
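The abstract's core idea — reweighting the lasso penalty using the OLS estimate and its standard errors — can be illustrated with a short sketch. The weight forms below (1/|β̂_j| for the OLS-adaptive lasso and se_j/|β̂_j| for a standard-error-adjusted variant) are plausible illustrative assumptions, not the paper's exact SEA-lasso or NSEA-lasso definitions, which appear in the full text. The `adaptive_lasso` helper is hypothetical and uses the standard column-rescaling trick to solve a weighted lasso with scikit-learn.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 6

# Highly correlated predictors: all columns share a common factor z
z = rng.normal(size=(n, 1))
X = z + 0.1 * rng.normal(size=(n, p))
beta_true = np.array([3.0, 1.5, 0.0, 0.0, 2.0, 0.0])
y = X @ beta_true + rng.normal(size=n)

# OLS estimate and its standard errors
XtX_inv = np.linalg.inv(X.T @ X)
beta_ols = XtX_inv @ X.T @ y
resid = y - X @ beta_ols
sigma2 = resid @ resid / (n - p)          # residual variance estimate
se = np.sqrt(sigma2 * np.diag(XtX_inv))   # se of each OLS coefficient

# OLS-adaptive-lasso weights vs an SE-adjusted variant (illustrative forms)
w_ols = 1.0 / np.abs(beta_ols)
w_sea = se / np.abs(beta_ols)   # down-weights coefficients with small |t-stat|

def adaptive_lasso(X, y, w, lam=0.1):
    """Weighted lasso via column rescaling: minimizing
    ||y - Xb||^2 + lam * sum_j w_j |b_j| is equivalent to an
    ordinary lasso on X_j / w_j, with coefficients scaled back."""
    Xw = X / w
    fit = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(Xw, y)
    return fit.coef_ / w

print(adaptive_lasso(X, y, w_sea))
```

Under collinearity, OLS coefficients with large standard errors are unreliable, so a weight proportional to se_j/|β̂_j| penalizes them more heavily than the plain 1/|β̂_j| weight does — the intuition behind adjusting the weights by standard errors.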



Author information


Corresponding author

Correspondence to Wei Qian.


Cite this article

Qian, W., Yang, Y. Model selection via standard error adjusted adaptive lasso. Ann Inst Stat Math 65, 295–318 (2013). https://doi.org/10.1007/s10463-012-0370-0

