
Interpretable Bayesian network abstraction for dimension reduction

  • S.I.: Interpretation of Deep Learning
Neural Computing and Applications

Abstract

Dimension reduction methods are effective for tackling the complexity of learning models from high-dimensional data. Usually, they are presented as black boxes, where the reduction process is hidden from practitioners. Yet this process could provide a reliable framework for understanding the regularities behind the data. Furthermore, in some application contexts, the available datasets suffer from a severe shortage of records, so both classical and deep dimension reduction methods often fall into the overfitting trap. We propose to tackle these challenges under the Bayesian network paradigm, combined with latent variable learning. We introduce an interpretable framework for learning a reduced dimension while remaining effective against the curse of dimensionality. Our extensive experimental results on benchmark datasets show that our dimension reduction algorithm yields a user-friendly model that not only minimizes the information loss incurred by the reduction process, but also avoids overfitting caused by the scarcity of records.
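
To make the latent-variable abstraction behind this approach concrete, the following minimal Python sketch compresses several correlated binary features into a single discrete latent variable by running EM on a latent class model (a Bayesian network with one hidden root and observed children). This is an illustrative sketch under simplified assumptions, not the authors' IBNA implementation; the function name, parameters, and toy data below are hypothetical, and the actual code is linked in the Notes below.

# Hypothetical sketch: abstract several binary features into one latent variable
# (latent class model fitted with EM). Not the IBNA algorithm itself.
import numpy as np

def fit_latent_class(X, n_states=2, n_iter=100, seed=0):
    """Fit P(Z) and P(X_j = 1 | Z) for binary features X of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    prior = np.full(n_states, 1.0 / n_states)             # P(Z = k)
    theta = rng.uniform(0.25, 0.75, size=(n_states, d))   # P(X_j = 1 | Z = k)
    for _ in range(n_iter):
        # E-step: responsibilities P(Z = k | x_i) under the current parameters
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        log_post = np.log(prior) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate P(Z) and P(X_j | Z) from expected counts
        nk = resp.sum(axis=0)
        prior = nk / n
        theta = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return prior, theta, resp

# Toy data: 6 correlated binary features driven by one hidden binary cause.
rng = np.random.default_rng(1)
z = rng.integers(0, 2, size=500)
p = np.where(z[:, None] == 1, 0.8, 0.2)
X = (rng.random((500, 6)) < p).astype(float)

prior, theta, resp = fit_latent_class(X, n_states=2)
Z_reduced = resp.argmax(axis=1)   # the 6 observed features abstracted into 1 latent column
print("P(Z):", np.round(prior, 3))
print("P(X_j = 1 | Z):", np.round(theta, 3))

The recovered latent column Z_reduced plays the role of the reduced dimension: the conditional probability tables P(X_j | Z) stay inspectable, which is what makes this kind of abstraction interpretable rather than a black box.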


Notes

  1. These datasets are downloadable at: https://archive.ics.uci.edu/ml/datasets.php.

  2. The IBNA code is available online at: https://github.com/HasnaNjah/IBNA.


Author information


Corresponding author

Correspondence to Hasna Njah.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest regarding the present research.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Njah, H., Jamoussi, S. & Mahdi, W. Interpretable Bayesian network abstraction for dimension reduction. Neural Comput & Applic 35, 10031–10049 (2023). https://doi.org/10.1007/s00521-022-07810-4

