Improved Meta-ELM with error feedback incremental ELM as hidden nodes

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Liao et al. (Neurocomputing 128:81–87, 2014) proposed a meta-learning approach to the extreme learning machine (Meta-ELM), which achieves good generalization performance by training multiple ELMs. However, one of its open problems is overfitting when minimizing the training error. In this paper, we propose an improved meta-learning model of ELM (improved Meta-ELM) to address this problem. The improved Meta-ELM architecture consists of several base ELMs, each an error feedback incremental extreme learning machine (EFI-ELM), and a top ELM. Training proceeds in two stages: first, each EFI-ELM base learner is trained on a subset of the training data; then, the top ELM is trained with the base ELMs acting as its hidden nodes. Simulation results on artificial and benchmark datasets show that the proposed improved Meta-ELM model is more feasible and effective than Meta-ELM.
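The abstract describes the two-stage structure but not the exact EFI-ELM update, so the following is only a minimal NumPy sketch of that structure, not the authors' implementation. It assumes the classic I-ELM residual-fitting rule (reference 9) as a stand-in for EFI-ELM's error feedback, and all names (`IncrementalELM`, `ImprovedMetaELM`, `n_base`, `n_nodes`) and the SinC-style toy data are illustrative assumptions.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


class IncrementalELM:
    """Incremental ELM base learner: a stand-in for the paper's EFI-ELM.

    Hidden nodes are added one at a time; each new node's output weight is
    chosen to fit the current residual, which is then fed back as the error
    for the next node (the I-ELM rule of Huang et al., reference 9)."""

    def __init__(self, n_nodes=20, rng=None):
        self.n_nodes = n_nodes
        self.rng = rng if rng is not None else np.random.default_rng()

    def fit(self, X, y):
        n, d = X.shape
        self.W = self.rng.standard_normal((d, self.n_nodes))  # random input weights
        self.b = self.rng.standard_normal(self.n_nodes)       # random biases
        self.beta = np.zeros(self.n_nodes)
        e = y.astype(float).copy()                             # current residual error
        for i in range(self.n_nodes):
            h = sigmoid(X @ self.W[:, i] + self.b[i])          # new hidden node output
            self.beta[i] = (e @ h) / (h @ h)                   # least-squares fit to the residual
            e = e - self.beta[i] * h                           # error feedback for the next node
        return self

    def predict(self, X):
        return sigmoid(X @ self.W + self.b) @ self.beta


class ImprovedMetaELM:
    """Two-stage meta-learner: base ELMs on data subsets, then a top ELM."""

    def __init__(self, n_base=5, n_nodes=25, rng=None):
        self.n_base, self.n_nodes = n_base, n_nodes
        self.rng = rng if rng is not None else np.random.default_rng()

    def fit(self, X, y):
        # Stage 1: train each base learner on its own subset of the training data.
        subsets = np.array_split(self.rng.permutation(len(X)), self.n_base)
        self.bases = [IncrementalELM(self.n_nodes, self.rng).fit(X[s], y[s]) for s in subsets]
        # Stage 2: treat the base learners' outputs as the hidden-layer matrix
        # of the top ELM and solve for its output weights by least squares.
        H = np.column_stack([m.predict(X) for m in self.bases])
        self.beta_top = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return np.column_stack([m.predict(X) for m in self.bases]) @ self.beta_top


# Toy regression check on a noisy SinC-style target (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-10.0, 10.0, size=(1000, 1))
y = np.sinc(X[:, 0] / np.pi) + 0.05 * rng.standard_normal(1000)
model = ImprovedMetaELM(n_base=5, n_nodes=25, rng=rng).fit(X, y)
print("train RMSE:", np.sqrt(np.mean((model.predict(X) - y) ** 2)))
```

In this sketch each base learner contributes one "hidden node" output to the top ELM, mirroring the base-ELMs-as-hidden-nodes idea of Meta-ELM; the error feedback of EFI-ELM is approximated by fitting every new node to the current residual.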

References

  1. Liu HP, Liu YH, Sun FC (2015) Robust exemplar extraction using structured sparse coding. IEEE Trans Neural Netw Learn Syst 26(8):1816–1821

  2. Liu HP, Qin J, Sun FC, Guo D. Extreme kernel sparse learning for tactile object recognition. IEEE Trans Cybern (in press)

  3. Yang YM, Jonathan Wu QM (2009) Multilayer extreme learning machine with subnetwork nodes for representation learning. IEEE Trans Cybern 20(8):1352–1357

  4. Cao J, Zhao Y, Lai X, Ong MEH, Yin C, Koh Z, Liu N (2015) Landmark recognition with sparse representation classification and extreme learning machine. J Frankl Inst 352(10):4528–4545

  5. Huang GB, Bai Z, Kasun LLC, Vong CM (2015) Local receptive fields based extreme learning machine. IEEE Comput Intell Mag 10(2):18–29

  6. Guo D, Zhang Y, Xiao Z, Mao M, Liu J (2015) Common nature of learning between Bp-Type and Hopfield-type neural networks. Neurocomputing 167:578–586

  7. Qi XX, Yuan ZH, Han XW (2015) Diagnosis of misalignment faults by tacholess order tracking analysis and RBF networks. Neurocomputing 169:439–448

  8. Ekici S, Yildirim S, Poyraz M (2009) A transmission line fault locator based on Elman recurrent networks. Appl Soft Comput 9(1):341–347

  9. Huang GB, Chen L, Siew CK (2006) Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Trans Neural Netw 17(4):879–892

  10. Huang GB, Chen L (2007) Convex incremental extreme learning machine. Neurocomputing 70:3056–3062

  11. Huang GB, Chen L (2008) Enhanced random search based incremental extreme learning machine. Neurocomputing 71(16–18):3460–3468

  12. Miche Y, Sorjamaa A, Bas P, Simula O, Jutten C, Lendasse A (2010) OP-ELM: optimally pruned extreme learning machine. IEEE Trans Neural Netw 21(1):158–162

  13. Feng G, Huang GB, Lin Q, Gay R (2009) Error minimized extreme learning machine with growth of hidden nodes and incremental learning. IEEE Trans Neural Netw 20(8):1352–1357

  14. Lan Y, Soh YC, Huang GB (2010) Two-stage extreme learning machine for regression. Neurocomputing 73(1):3028–3038

  15. Yang YM, Wang YN, Yuan XF (2012) Bidirectional extreme learning machine for regression problem and its learning effectiveness. IEEE Trans Neural Netw Learn Syst 23:1498–1505

  16. Sun ZL, Choi TM, Au KF, Yu Y (2008) Sales forecasting using extreme learning machine with applications in fashion retailing. Decis Support Syst 46(1):411–419

  17. Lan Y, Soh YC, Huang GB (2009) Ensemble of online sequential extreme learning machine. Neurocomputing 72:3391–3395

  18. Liao SZ, Feng C (2014) Meta-ELM: ELM with ELM hidden nodes. Neurocomputing 128:81–87

  19. Horn RA, Johnson CR (2012) Matrix analysis. Cambridge University Press

Author information

Corresponding author

Correspondence to Fenxi Yao.

Ethics declarations

Conflict of interest

The authors of this paper (Weidong Zou, Fenxi Yao, Baihai Zhang, Zixiao Guan) declare that they have no conflict of interest.

About this article

Cite this article

Zou, W., Yao, F., Zhang, B. et al. Improved Meta-ELM with error feedback incremental ELM as hidden nodes. Neural Comput & Applic 30, 3363–3370 (2018). https://doi.org/10.1007/s00521-017-2922-y
