Convex optimization methods for dimension reduction and coefficient estimation in multivariate linear regression

  • Full Length Paper
  • Series A
  • Published in: Mathematical Programming

Abstract

In this paper, we study convex optimization methods for computing the nuclear (or, trace) norm regularized least squares estimate in multivariate linear regression. The so-called factor estimation and selection method, recently proposed by Yuan et al. (J. R. Stat. Soc. Ser. B (Statistical Methodology) 69(3):329–346, 2007), conducts parameter estimation and factor selection simultaneously and has been shown to enjoy nice properties in both large and finite samples. Computing the estimates, however, can be very challenging in practice because of the high dimensionality and the nuclear norm constraint. We explore a variant, due to Tseng, of Nesterov's smooth method, as well as interior point methods, for computing the penalized least squares estimate. The performance of these methods is then compared on a set of randomly generated instances. We show that the variant of Nesterov's smooth method generally outperforms, substantially, the interior point method implemented in SDPT3 version 4.0 (beta) (Toh et al., On the implementation and usage of SDPT3—a MATLAB software package for semidefinite-quadratic-linear programming, version 4.0. Manuscript, Department of Mathematics, National University of Singapore, 2006). Moreover, the former method is much more memory efficient.
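To make the estimator concrete: the nuclear norm penalized least squares problem min_B ½‖Y − XB‖_F² + λ‖B‖_* admits a first-order method whose proximal step is singular value soft-thresholding. The sketch below is a plain (unaccelerated) proximal gradient scheme, not the accelerated Tseng/Nesterov variant studied in the paper; the function names `svt` and `nuclear_norm_ls` are illustrative, not from the source.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)          # soft-threshold the singular values
    return (U * s) @ Vt

def nuclear_norm_ls(X, Y, lam, n_iter=500):
    """Proximal gradient for min_B 0.5*||Y - X B||_F^2 + lam*||B||_*."""
    p, q = X.shape[1], Y.shape[1]
    L = np.linalg.norm(X, 2) ** 2         # Lipschitz constant of the gradient
    B = np.zeros((p, q))
    for _ in range(n_iter):
        G = X.T @ (X @ B - Y)             # gradient of the smooth least squares term
        B = svt(B - G / L, lam / L)       # gradient step followed by the SVT prox
    return B
```

Each iteration costs one SVD of a p×q matrix, which is what makes first-order methods far more memory efficient than interior point solvers on large instances.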

References

  1. Anderson, T.W.: Estimating linear restrictions on regression coefficients for multivariate normal distributions. Ann. Math. Stat. 22, 327–351 (1951)

  2. Bach, F.: Consistency of trace norm minimization. J. Mach. Learn. Res. 9, 1019–1048 (2008)

  3. Ben-Tal, A., Nemirovski, A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS-SIAM Series on Optimization. SIAM, Philadelphia (2001)

  4. Bertsekas, D.: Nonlinear Programming, 2nd edn. Athena Scientific, Belmont (1999)

  5. Breiman, L.: Heuristics of instability and stabilization in model selection. Ann. Stat. 24, 2350–2383 (1996)

  6. Brooks, R., Stone, M.: Joint continuum regression for multiple predictands. J. Am. Stat. Assoc. 89, 1374–1377 (1994)

  7. Fazel, M., Hindi, H., Boyd, S.P.: A rank minimization heuristic with application to minimum order system approximation. In: Proceedings American Control Conference, vol. 6, pp. 4734–4739 (2001)

  8. Hiriart-Urruty, J.-B., Lemaréchal, C.: Convex Analysis and Minimization Algorithms I, volume 305 of Comprehensive Studies in Mathematics. Springer, New York (1993)

  9. Hotelling, H.: The most predictable criterion. J. Educ. Psychol. 26, 139–142 (1935)

  10. Hotelling, H.: Relations between two sets of variables. Biometrika 28, 321–377 (1936)

  11. Izenman, A.: Reduced-rank regression for the multivariate linear model. J. Multivar. Anal. 5, 248–264 (1975)

  12. Lu, Z.: Smooth optimization approach for covariance selection. SIAM J. Optim. 19, 1807–1827 (2009)

  13. Lu, Z., Monteiro, R.D.C., Yuan, M.: Convex optimization methods for dimension reduction and coefficient estimation in multivariate linear regression. Technical report, Department of Mathematics, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada, January 2008

  14. Massy, W.: Principal components regression in exploratory statistical research. J. Am. Stat. Assoc. 60, 234–246 (1965)

  15. Nesterov, Y.E.: A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). Doklady AN SSSR 269, 543–547 (1983). Translated as Soviet Math. Dokl.

  16. Nesterov, Y.E.: Smooth minimization of nonsmooth functions. Math. Program. 103, 127–152 (2005)

  17. Recht, B., Fazel, M., Parrilo, P.A.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. Technical report arXiv:0706.4138v1, arXiv (2007)

  18. Reinsel G., Velu R.: Multivariate Reduced-rank Regression: Theory and Application. Springer, New York (1998)

  19. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 58, 267–288 (1996)

  20. Toh, K.C., Tütüncü, R.H., Todd, M.J.: On the implementation and usage of SDPT3—a MATLAB software package for semidefinite-quadratic-linear programming, version 4.0. Manuscript, Department of Mathematics, National University of Singapore, July 2006

  21. Tseng, P.: On accelerated proximal gradient methods for convex–concave optimization. Manuscript, Department of Mathematics, University of Washington, May 2008

  22. van den Berg, E., Friedlander, M.: A root-finding approach for sparse recovery. Talk given at ISMP, Chicago, 23–28 August 2009

  23. van den Berg, E., Friedlander, M.: In pursuit of a root. Working paper, Department of Computer Science, University of British Columbia, November 2009

  24. Wold, H.: Soft modeling by latent variables: the nonlinear iterative partial least squares approach. In: Perspectives in Probability and Statistics: Papers in Honor of M. S. Bartlett. Academic Press, New York (1975)

  25. Yuan, M., Ekici, A., Lu, Z., Monteiro, R.D.C.: Dimension reduction and coefficient estimation in multivariate linear regression. J. R. Stat. Soc. Ser. B Stat. Methodol. 69(3), 329–346 (2007)


Author information

Corresponding author

Correspondence to Zhaosong Lu.

Additional information

Zhaosong Lu was supported in part by SFU President’s Research Grant and NSERC Discovery Grant. Renato D. C. Monteiro was supported in part by NSF Grants CCF-0430644, CCF-0808863 and CMMI-0900094 and ONR Grants N00014-05-1-0183 and N00014-08-1-0033. Ming Yuan was supported in part by NSF Grants DMS-0624841 and DMS-0706724.

About this article

Cite this article

Lu, Z., Monteiro, R.D.C. & Yuan, M. Convex optimization methods for dimension reduction and coefficient estimation in multivariate linear regression. Math. Program. 131, 163–194 (2012). https://doi.org/10.1007/s10107-010-0350-1
