
Abstract

Principal Component Analysis (PCA) (Jolliffe 2011) is a well-known and fundamental linear method for subspace learning and dimensionality reduction (Friedman et al. 2009). The method, which is also used for feature extraction, was first proposed by Pearson (1901).
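
To make the method concrete, the following is a minimal sketch of classical PCA via eigendecomposition of the sample covariance matrix (a NumPy illustration under standard assumptions, not code from the chapter; the function name pca and the choice of two components are only for this example):

    import numpy as np

    def pca(X, n_components=2):
        # Center the data so the covariance is taken about the mean.
        X_centered = X - X.mean(axis=0)
        # Sample covariance matrix over the features.
        cov = np.cov(X_centered, rowvar=False)
        # Eigendecomposition of the symmetric covariance matrix.
        eigvals, eigvecs = np.linalg.eigh(cov)
        # Order the eigenvectors by decreasing eigenvalue (explained variance).
        order = np.argsort(eigvals)[::-1]
        components = eigvecs[:, order[:n_components]]
        # Project the centered data onto the leading principal directions.
        return X_centered @ components, components

    # Example usage: reduce random 5-dimensional data to 2 dimensions.
    X = np.random.randn(100, 5)
    X_reduced, components = pca(X, n_components=2)  # X_reduced has shape (100, 2)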


Notes

  1. If there are c classes, the one-hot encoding of the j-th class is a binary vector whose entries are all zero except for a one in the j-th position (see the first sketch after these notes).

  2. The Haar wavelet is a family of square-shaped filters that form wavelet bases (see the second sketch after these notes).
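
To make Note 1 concrete, here is a minimal NumPy sketch of one-hot encoding (an illustration assuming a 0-indexed class label, not code from the chapter; the function name one_hot is hypothetical):

    import numpy as np

    def one_hot(j, c):
        # Binary vector of length c: all zeros except a one at position j.
        e = np.zeros(c, dtype=int)
        e[j] = 1
        return e

    print(one_hot(2, 5))  # [0 0 1 0 0]

Similarly, for Note 2, one level of the (orthonormal) Haar wavelet transform of a 1-D signal can be sketched as scaled pairwise sums and differences, which correspond to the square-shaped low-pass and high-pass filters; the even signal length and the function name haar_level are assumptions of this illustration:

    import numpy as np

    def haar_level(x):
        # One level of the Haar transform for a signal of even length.
        x = np.asarray(x, dtype=float)
        low = (x[0::2] + x[1::2]) / np.sqrt(2)   # averages: square-shaped low-pass filter
        high = (x[0::2] - x[1::2]) / np.sqrt(2)  # details: square-shaped high-pass filter
        return low, high

    low, high = haar_level([4.0, 6.0, 10.0, 12.0])  # low-pass and high-pass halves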

References

  1. Hervé Abdi and Lynne J Williams. “Principal component analysis”. In: Wiley Interdisciplinary Reviews: Computational Statistics 2.4 (2010), pp. 433–459.

  2. Jonathan L Alperin. Local Representation Theory: Modular Representations as an Introduction to the Local Representation Theory of Finite Groups. Vol. 11. Cambridge University Press, 1993.

  3. Eric Bair et al. “Prediction by supervised principal components”. In: Journal of the American Statistical Association 101.473 (2006), pp. 119–137.

  4. Elnaz Barshan et al. “Supervised principal component analysis: Visualization, classification and regression on subspaces and submanifolds”. In: Pattern Recognition 44.7 (2011), pp. 1357–1371.

  5. Mikhail Belkin and Partha Niyogi. “Laplacian eigenmaps for dimensionality reduction and data representation”. In: Neural Computation 15.6 (2003), pp. 1373–1396.

  6. Raymond B Cattell. “The scree test for the number of factors”. In: Multivariate Behavioral Research 1.2 (1966), pp. 245–276.

  7. Michael AA Cox and Trevor F Cox. “Multidimensional scaling”. In: Handbook of Data Visualization. Springer, 2008, pp. 315–347.

  8. David L Donoho. “High-dimensional data analysis: The curses and blessings of dimensionality”. In: AMS Math Challenges Lecture (2000).

  9. Susan T Dumais. “Latent semantic analysis”. In: Annual Review of Information Science and Technology 38.1 (2004), pp. 188–230.

  10. Brendan Frey. Frey face dataset. https://cs.nyu.edu/~roweis/data.html. Accessed: June 2017.

  11. Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The Elements of Statistical Learning. Vol. 2. Springer Series in Statistics. New York, NY: Springer, 2009.

  12. Ali Ghodsi. “Dimensionality reduction: a short tutorial”. In: Department of Statistics and Actuarial Science, Univ. of Waterloo, Ontario, Canada 37 (2006).

  13. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.

  14. Arthur Gretton et al. “Measuring statistical dependence with Hilbert-Schmidt norms”. In: International Conference on Algorithmic Learning Theory. Springer, 2005, pp. 63–77.

  15. Ji Hun Ham et al. “A kernel view of the dimensionality reduction of manifolds”. In: International Conference on Machine Learning. 2004.

  16. Thomas Hofmann, Bernhard Schölkopf, and Alexander J Smola. “Kernel methods in machine learning”. In: The Annals of Statistics (2008), pp. 1171–1220.

  17. Ian Jolliffe. Principal Component Analysis. Springer, 2011.

  18. Shuangge Ma and Ying Dai. “Principal component analysis based methods in bioinformatics studies”. In: Briefings in Bioinformatics 12.6 (2011), pp. 714–722.

  19. Thomas P Minka. “Automatic choice of dimensionality for PCA”. In: Advances in Neural Information Processing Systems. 2001, pp. 598–604.

  20. Hoda Mohammadzade et al. “Critical object recognition in millimeter-wave images with robustness to rotation and scale”. In: JOSA A 34.6 (2017), pp. 846–855.

  21. Karl Pearson. “LIII. On lines and planes of closest fit to systems of points in space”. In: The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 2.11 (1901), pp. 559–572.

  22. Sam T Roweis and Lawrence K Saul. “Nonlinear dimensionality reduction by locally linear embedding”. In: Science 290.5500 (2000), pp. 2323–2326.

  23. David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. “Learning representations by back-propagating errors”. In: Nature 323.6088 (1986), pp. 533–536.

  24. Bernhard Schölkopf, Alexander Smola, and Klaus-Robert Müller. “Kernel principal component analysis”. In: International Conference on Artificial Neural Networks. Springer, 1997, pp. 583–588.

  25. Bernhard Schölkopf, Alexander Smola, and Klaus-Robert Müller. “Nonlinear component analysis as a kernel eigenvalue problem”. In: Neural Computation 10.5 (1998), pp. 1299–1319.

  26. Radomir S Stanković and Bogdan J Falkowski. “The Haar wavelet transform: its status and achievements”. In: Computers & Electrical Engineering 29.1 (2003), pp. 25–44.

  27. Harry Strange and Reyer Zwiggelaar. Open Problems in Spectral Dimensionality Reduction. Springer, 2014.

  28. Joshua B Tenenbaum, Vin De Silva, and John C Langford. “A global geometric framework for nonlinear dimensionality reduction”. In: Science 290.5500 (2000), pp. 2319–2323.

  29. The Digital Technology Group, AT&T Laboratories Cambridge. https://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html. 1994. Accessed: June 2017.

  30. Matthew Turk and Alex Pentland. “Eigenfaces for recognition”. In: Journal of Cognitive Neuroscience 3.1 (1991), pp. 71–86.

  31. Matthew A Turk and Alex P Pentland. “Face recognition using eigenfaces”. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’91). IEEE, 1991, pp. 586–591.

  32. Yi-Qing Wang. “An analysis of the Viola-Jones face detection algorithm”. In: Image Processing On Line 4 (2014), pp. 128–148.

  33. M-H Yang, Narendra Ahuja, and David Kriegman. “Face recognition using kernel eigenfaces”. In: Proceedings of the 2000 International Conference on Image Processing. Vol. 1. IEEE, 2000, pp. 37–40.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Ghojogh, B., Crowley, M., Karray, F., Ghodsi, A. (2023). Principal Component Analysis. In: Elements of Dimensionality Reduction and Manifold Learning. Springer, Cham. https://doi.org/10.1007/978-3-031-10602-6_5


  • DOI: https://doi.org/10.1007/978-3-031-10602-6_5


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-10601-9

  • Online ISBN: 978-3-031-10602-6

  • eBook Packages: Computer Science, Computer Science (R0)
