
Extreme learning machine: algorithm, theory and applications


Abstract

Extreme learning machine (ELM) is a new learning algorithm for single-hidden-layer feedforward neural networks. Compared with conventional neural network learning algorithms, it overcomes the problems of slow training speed and over-fitting. ELM is based on empirical risk minimization theory, and its learning process needs only a single iteration; the algorithm thus avoids multiple iterations and being trapped in local minima. It has been used in various fields and applications because of its good generalization ability, robustness, controllability, and fast learning rate. In this paper, we review the latest research progress on ELM algorithms, theory, and applications. We first analyze the theory and algorithmic ideas of ELM, then describe its latest progress in recent years, including ELM models and specific applications, and finally point out future research and development prospects for ELM.
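
As a minimal sketch of the single-step training idea summarized above (hidden-layer weights and biases assigned at random, output weights obtained in closed form via the Moore-Penrose pseudoinverse), assuming a sigmoid activation and NumPy; the function names and toy data below are illustrative only and do not reproduce the authors' implementation:

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Basic ELM training for a single-hidden-layer feedforward network.
    X: (n_samples, n_features) inputs, T: (n_samples, n_outputs) targets."""
    rng = np.random.default_rng(seed)
    # Hidden-layer input weights and biases are assigned randomly and never tuned.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    # Hidden-layer output matrix with a sigmoid activation.
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights solved in closed form via the Moore-Penrose pseudoinverse:
    # beta = pinv(H) @ T, i.e. a single non-iterative training step.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage: regress y = sin(x) from noisy samples.
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
T = np.sin(X) + 0.05 * np.random.default_rng(1).standard_normal(X.shape)
W, b, beta = elm_train(X, T, n_hidden=50)
print("train MSE:", np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```

The closed-form solution of the output weights is what gives ELM its fast, non-iterative training compared with gradient-based methods such as back-propagation.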



Acknowledgments

This work is supported by the National Key Basic Research Program of China (No. 2013CB329502), the National Natural Science Foundation (41074003), the Opening Foundation of the Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (IIP2010-1), and the Opening Foundation of Beijing Key Lab of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications.

Author information

Corresponding author

Correspondence to Shifei Ding.


About this article

Cite this article

Ding, S., Zhao, H., Zhang, Y. et al. Extreme learning machine: algorithm, theory and applications. Artif Intell Rev 44, 103–115 (2015). https://doi.org/10.1007/s10462-013-9405-z

