Denoising deep extreme learning machine for sparse representation

  • Regular Research Paper
  • Published in: Memetic Computing

Abstract

In recent years, a great deal of research has focused on the sparse representation of signals. In particular, the dictionary learning algorithm K-SVD was introduced to efficiently learn a redundant dictionary from a set of training signals, and much progress has since been made on different aspects of it. A related technique is the extreme learning machine (ELM), a single-hidden-layer feed-forward neural network (SLFN) with fast learning speed, good generalization, and universal classification capability. In this paper, we propose an optimization of K-SVD based on a denoising deep extreme learning machine built on autoencoders (DDELM-AE) for sparse representation. In other words, we obtain a new learned representation through the DDELM-AE, and feeding it to K-SVD as the new “input” makes the conventional algorithm perform better. To verify the classification performance of the new method, we conduct extensive experiments on real-world data sets, with deep models (i.e., stacked autoencoders) serving as a comparison. The experimental results indicate that our proposed method is highly efficient in terms of both speed and accuracy.
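
To make the pipeline concrete, the following is a minimal Python sketch of a single denoising ELM autoencoder layer and of the point at which its learned representation would replace the raw signals as the input to K-SVD. This is an illustration under stated assumptions, not the authors' implementation: the function name elm_autoencoder, the tanh activation, the ridge constant C, the corruption level noise_std, and the final hypothetical ksvd call are all our own choices.

```python
import numpy as np

def elm_autoencoder(X, n_hidden, C=1e3, noise_std=0.1, seed=0):
    """One denoising ELM-AE layer (a sketch): corrupt the input, map it
    through a random hidden layer, then solve a regularized least-squares
    problem so the output reconstructs the clean input."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    X_noisy = X + noise_std * rng.standard_normal((n, d))  # denoising criterion
    W = rng.standard_normal((d, n_hidden))                 # random input weights
    b = rng.standard_normal(n_hidden)                      # random hidden biases
    H = np.tanh(X_noisy @ W + b)                           # hidden activations
    # Output weights by ridge regression: beta = (H'H + I/C)^(-1) H'X,
    # with the *clean* X as the reconstruction target.
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)
    # ELM-AE uses beta' as the transformation into the new feature space.
    return np.tanh(X @ beta.T)

# Stack two layers, then hand the learned representation to K-SVD
# in place of the raw training signals.
X = np.random.rand(500, 64)                    # toy data: 500 signals, 64 dims
H1 = elm_autoencoder(X, n_hidden=128, seed=0)
H2 = elm_autoencoder(H1, n_hidden=64, seed=1)
# D, codes = ksvd(H2.T, n_atoms=256, sparsity=5)  # hypothetical K-SVD routine
```

The design point the abstract relies on is that the random hidden weights are never trained; only beta is computed in closed form, which is what gives ELM-based layers their speed advantage over back-propagation-trained autoencoders.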

References

  1. Aharon M, Elad M, Bruckstein A (2005) K-SVD: design of dictionaries for sparse representations. Proc Spars 5:9–12

  2. Aharon M, Elad M, Bruckstein A (2006) K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans Signal Process 54(11):4311–4322

  3. Elad M, Aharon M (2006) Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans Image Process 15(12):3736–3745

  4. Mairal J, Elad M, Sapiro G (2008) Sparse representation for color image restoration. IEEE Trans Image Process 17(1):53–69

  5. Ranzato M, Poultney C, Chopra S, LeCun Y (2006) Efficient learning of sparse representations with an energy-based model. In: Advances in Neural Information Processing Systems (NIPS), pp 1137–1144

  6. Yang J, Yu K, Gong Y, Huang T (2009) Linear spatial pyramid matching using sparse coding for image classification. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

  7. Wright J, Yang A, Ganesh A, Sastry S, Ma Y (2009) Robust face recognition via sparse representation. IEEE Trans Pattern Anal Mach Intell 31(2):210–227

  8. Wang J, Su G, Xiong Y, Chen J, Shang Y, Liu J, Ren X (2013) Sparse representation for face recognition based on constraint sampling and face alignment. Tsinghua Sci Technol 1:62–67

  9. Zheng Y, Sheng H, Zhang B, Zhang J, Xiong Z (2015) Weight-based sparse coding for multi-shot person re-identification. Sci China Inf Sci 58(10):1–15

  10. Cheng H, Liu Z, Yang L, Chen X (2013) Sparse representation and learning in visual recognition: theory and applications. Sig Process 93(6):1408–1425

  11. Bengio Y, LeCun Y (2007) Scaling learning algorithms towards AI. In: Bottou L, Chapelle O, DeCoste D, Weston J (eds) Large-scale kernel machines, vol 34. MIT Press, pp 321–359

  12. Bengio Y (2009) Learning deep architectures for AI. Found Trends Mach Learn 2:1–55

  13. Vincent P, Larochelle H, Lajoie I, Bengio Y, Manzagol PA (2010) Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. J Mach Learn Res 11:3371–3408

  14. Hinton GE, Osindero S, Teh YW (2006) A fast learning algorithm for deep belief nets. Neural Comput 18:1527–1554

  15. Hinton GE, Salakhutdinov RR (2006) Reducing the dimensionality of data with neural networks. Science 313(5786):504–507

  16. Scott S, Matwin S (1999) Feature engineering for text classification. In: International Conference on Machine Learning (ICML), pp 379–388

  17. Dong J, Karianakis N, Davis D, Hernandez J, Balzer J, Soatto S (2015) Multi-view feature engineering and learning. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 3251–3260

  18. Salakhutdinov R, Larochelle H (2010) Efficient learning of deep Boltzmann machines. J Mach Learn Res W&CP 9:693–700

  19. Huang GB, Zhu QY, Siew CK (2006) Extreme learning machine: theory and applications. Neurocomputing 70:489–501

  20. Yang Y, Wu J (2016) Multilayer extreme learning machine with subnetwork nodes for representation learning. IEEE Trans Cybern (in press)

  21. Tang J, Deng C, Huang GB (2016) Extreme learning machine for multilayer perceptron. IEEE Trans Neural Netw Learn Syst (in press)

  22. Cao J, Lin Z (2015) Extreme learning machine on high dimensional and large data applications: a survey. Math Probl Eng 2015:1–12

  23. Cao J, Lin Z, Huang G-B, Liu N (2012) Voting based extreme learning machine. Inf Sci 185(1):66–77

  24. Shojaeilangari S, Yau WY, Nandakumar K, Li J, Teoh EK (2015) Robust representation and recognition of facial emotions using extreme sparse learning. IEEE Trans Image Process 24(7):2140–2152

  25. Sun Z, Yu Y (2015) Sparse coding extreme learning machine for classification. In: Extreme Learning Machine

  26. Peng Y, Lu BL (2015) Discriminative extreme learning machine with supervised sparsity preserving for image classification. In: Extreme Learning Machine

  27. Bai Z, Huang GB, Wang D (2015) Sparse extreme learning machine for regression. In: Extreme Learning Machine

  28. He B, Xu D, Nian R, van Heeswijk M, Yu Q, Miche Y, Lendasse A (2013) Fast face recognition via sparse coding and extreme learning machine. Cogn Comput 6(2):264–277

  29. Huang G, Zhu Q, Siew CK (2004) Extreme learning machine: a new learning scheme of feedforward neural networks. In: Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), vol 2, pp 985–990

  30. Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536

  31. Huang GB, Zhou H, Ding X, Zhang R (2012) Extreme learning machine for regression and multiclass classification. IEEE Trans Syst Man Cybern Part B Cybern 42(2):513–529

  32. Tang J, Deng C, Huang GB (2015) Extreme learning machine for multilayer perceptron. IEEE Trans Neural Netw Learn Syst:1–13

  33. Cambria E, Huang GB (2013) Extreme learning machines. IEEE Intell Syst 28(6):30–59

  34. Widrow B, Greenblatt A, Kim Y, Park D (2013) The No-Prop algorithm: a new learning algorithm for multilayer neural networks. Neural Netw 37:182–188

  35. Johnson W, Lindenstrauss J (1984) Extensions of Lipschitz mappings into a Hilbert space. Modern Anal Probab 26:189–206

  36. Pavone M, Coello CAC (2012) Optimization on complex systems. Memetic Computing 4(3):163–164

  37. Rubinstein R, Bruckstein AM, Elad M (2010) Dictionaries for sparse representation modeling. Proc IEEE 98(6):1045–1057

  38. Wright J, Ma Y, Sapiro G, Huang TS, Yan S (2010) Sparse representation for computer vision and pattern recognition. Proc IEEE 98(6):1031–1044

  39. Elad M, Figueiredo Mario AT, Ma Y (2010) On the role of sparse and redundant representations in image processing. Proc IEEE 98(6):972–982

  40. Bruckstein AM, Donoho DL, Elad M (2010) From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev 51(1):34–81

  41. Engan K, Aase SO, Husøy JH (1999) Method of optimal directions for frame design. In: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol 5, pp 2443–2446

  42. Mallat SG, Zhang Z (1993) Matching pursuits with time-frequency dictionaries. IEEE Trans Signal Process 41(12):3397–3415

  43. Chen SS, Donoho DL, Saunders MA (2001) Atomic decomposition by basis pursuit. SIAM Rev 43(1):129–159

  44. Tropp JA, Gilbert AC (2007) Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans Inf Theory 53(12):4655–4666

  45. Hull JJ (1994) A database for handwritten text recognition research. IEEE Trans Pattern Anal Mach Intell 16(5):550–554

  46. Ngiam J, Koh PW, Chen Z, Bhaskar S, Ng AY (2011) Sparse filtering. Adv Neural Inf Process Syst 2:1125–1133

Acknowledgments

This work was supported in part by the National Key Project for Basic Research of China under Grant 2013CB329403, the National Natural Science Foundation of China under Grant 61327809, the National High-Tech Research and Development Plan under Grant 2015AA042306, the Natural Science Foundation of Shanxi Province under Grant 2014011018-4, the Shanxi Scholarship Council of China under Grant 2013-033, and the Shanxi Scholarship Council of China under Grant 2015-045.

Author information

Corresponding author

Correspondence to Huaping Liu.

About this article

Cite this article

Cheng, X., Liu, H., Xu, X. et al. Denoising deep extreme learning machine for sparse representation. Memetic Comp. 9, 199–212 (2017). https://doi.org/10.1007/s12293-016-0185-2

