
Parameter self-tuning schemes for the two phase test sample sparse representation classifier

  • Original Article
  • Published in International Journal of Machine Learning and Cybernetics

Abstract

The Sparse Representation Classifier (SRC) and its variants are considered powerful classifiers in computer vision and pattern recognition. However, classifying a test sample is computationally expensive because an \(\ell _1\)-norm minimization problem must be solved to obtain the sparse code, so these classifiers are not the right choice for scenarios requiring fast classification. To overcome the high computational cost of SRC, a two-phase coding classifier based on classic Regularized Least Squares was proposed; it is considerably more efficient than SRC. Its main limitation is that the number of samples handed over to the second coding phase must be specified a priori. This paper overcomes this limitation and proposes five data-driven schemes that automatically estimate the optimal number of local samples. These schemes cover the three cases encountered in any learning system: supervised, unsupervised, and semi-supervised. Experiments conducted on five image datasets show that the introduced learning schemes can improve on the performance of the two-phase linear coding classifier with ad-hoc choices for the number of local samples.
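To make the two-phase idea concrete, the following is a minimal NumPy sketch of a two-phase test sample representation classifier in the spirit of Xu et al.'s TPTSR: a first regularized least-squares coding over all training samples selects the M "local" samples that best approximate the test sample, and a second coding over only those M samples yields class-wise reconstruction residuals. Variable names, the synthetic data, and the parameter defaults are illustrative assumptions, not taken from the paper; in particular, M is exactly the parameter that the paper's five schemes would tune automatically.

```python
import numpy as np

def tptsr_classify(X, labels, y, M=10, mu=1e-2):
    """Two-phase coding classifier (illustrative sketch, not the paper's code).

    X      : (d, n) matrix whose columns are training samples
    labels : (n,) class label of each column of X
    y      : (d,) test sample
    M      : number of local samples kept for the second phase (the
             parameter the paper's schemes estimate automatically)
    mu     : regularization weight of both least-squares problems
    """
    n = X.shape[1]
    # Phase 1: code y over ALL training samples by regularized least squares,
    # a = argmin ||y - X a||^2 + mu ||a||^2  (closed form, no l1 solver needed).
    a = np.linalg.solve(X.T @ X + mu * np.eye(n), X.T @ y)

    # Rank each training sample by how well its weighted copy alone
    # approximates y, and keep the M best ("local") samples.
    e = np.array([np.linalg.norm(y - a[i] * X[:, i]) for i in range(n)])
    keep = np.argsort(e)[:M]

    # Phase 2: re-code y over only the M retained samples.
    Xm, lm = X[:, keep], labels[keep]
    b = np.linalg.solve(Xm.T @ Xm + mu * np.eye(M), Xm.T @ y)

    # Assign y to the class whose retained samples reconstruct it best.
    classes = np.unique(lm)
    res = [np.linalg.norm(y - Xm[:, lm == c] @ b[lm == c]) for c in classes]
    return classes[int(np.argmin(res))]

# Tiny synthetic usage example: two well-separated classes in 5 dimensions.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 0.1, (5, 20)) + np.array([[1.0, 0, 0, 0, 0]]).T
X1 = rng.normal(0.0, 0.1, (5, 20)) + np.array([[0, 1.0, 0, 0, 0]]).T
X = np.hstack([X0, X1])
labels = np.array([0] * 20 + [1] * 20)
y = np.array([1.0, 0.05, 0.0, 0.0, 0.0])  # close to class 0
pred = tptsr_classify(X, labels, y, M=8)
```

Both phases have closed-form solutions, which is why this classifier avoids the iterative \(\ell _1\) solvers that make SRC slow; the classification quality, however, hinges on choosing M well, motivating the self-tuning schemes proposed in the paper.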




Author information


Corresponding author

Correspondence to F. Dornaika.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Dornaika, F., El Traboulsi, Y. & Ruichek, Y. Parameter self-tuning schemes for the two phase test sample sparse representation classifier. Int. J. Mach. Learn. & Cyber. 11, 1387–1403 (2020). https://doi.org/10.1007/s13042-019-01045-x
