Abstract
The Sparse Representation Classifier (SRC) and its variants are powerful classifiers in computer vision and pattern recognition. However, classifying a test sample is computationally expensive because an \(\ell _1\)-norm minimization problem must be solved to obtain its sparse code, which makes these classifiers a poor choice for scenarios requiring fast classification. To overcome the high computational cost of SRC, a two-phase coding classifier based on classic Regularized Least Squares was proposed, which is more efficient than SRC. A significant limitation of this classifier, however, is that the number of samples handed over to the second coding phase must be specified a priori. This paper overcomes this limitation and proposes five data-driven schemes that automatically estimate the optimal number of local samples. These schemes cover the three settings encountered in any learning system: supervised, unsupervised, and semi-supervised. Experiments on five image datasets show that the introduced learning schemes can outperform the two-phase linear coding classifier used with ad-hoc choices for the number of local samples.
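To make the role of the tuned parameter concrete, the following is a minimal sketch of the two-phase regularized least-squares coding classifier described in the abstract (in the spirit of Xu et al.'s two-phase test sample representation). The function name, toy data, and the regularization weight `mu` are illustrative assumptions, not the paper's exact implementation; `M` is the number of local samples retained for the second phase, i.e. the quantity the proposed schemes estimate automatically.

```python
import numpy as np

def tptsr_classify(X, labels, y, M, mu=0.01):
    """Two-phase regularized least-squares coding (illustrative sketch).

    X: (d, n) training samples as columns; labels: (n,) class labels;
    y: (d,) test sample; M: number of local samples kept for phase 2;
    mu: regularization weight of the least-squares coding.
    """
    d, n = X.shape
    # Phase 1: code y over ALL training samples with regularized least squares:
    # a = (X^T X + mu*I)^{-1} X^T y
    a = np.linalg.solve(X.T @ X + mu * np.eye(n), X.T @ y)
    # Keep the M samples whose individual contribution best reconstructs y
    # (smallest deviation ||y - a_i * x_i||).
    dev = np.array([np.linalg.norm(y - a[i] * X[:, i]) for i in range(n)])
    keep = np.argsort(dev)[:M]
    Xm, lm = X[:, keep], labels[keep]
    # Phase 2: code y over the M retained (local) samples only.
    b = np.linalg.solve(Xm.T @ Xm + mu * np.eye(M), Xm.T @ y)
    # Assign the class whose retained samples yield the smallest residual.
    classes = np.unique(lm)
    resid = [np.linalg.norm(y - Xm[:, lm == c] @ b[lm == c]) for c in classes]
    return classes[int(np.argmin(resid))]
```

The classification quality depends directly on `M`: too small and the true class may lose all its representatives before phase 2, too large and phase 2 degenerates to phase 1. This sensitivity is what motivates the data-driven estimation schemes proposed in the paper.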
Dornaika, F., El Traboulsi, Y. & Ruichek, Y. Parameter self-tuning schemes for the two phase test sample sparse representation classifier. Int. J. Mach. Learn. & Cyber. 11, 1387–1403 (2020). https://doi.org/10.1007/s13042-019-01045-x