Path Weights Analyses in a Shallow Neural Network to Reach Explainable Artificial Intelligence (XAI) of fMRI Data

  • Conference paper in: Machine Learning, Optimization, and Data Science (LOD 2022)

Abstract

A new procedure is proposed to simplify a shallow neural network. To this end, we introduce the concept of “path weights”: the product of the successive weights along a single path from the input layer to the output layer. This concept is used, together with a direct analysis of the calculations the network performs, to simplify the network by removing paths deemed irrelevant to its decision-making.
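To make the idea concrete, the following minimal sketch computes path weights for a single-hidden-layer network. The weight matrices W1 (input to hidden) and W2 (hidden to output), and the keep-top-25% pruning rule, are illustrative assumptions, not the paper’s exact criterion.

```python
import numpy as np

# Path weight of the path i -> h -> o in a single-hidden-layer network:
# the product W1[i, h] * W2[h, o]. The matrices and the pruning threshold
# below are illustrative assumptions, not the authors' exact setup.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 6, 4, 2
W1 = rng.normal(size=(n_in, n_hid))   # input-to-hidden weights
W2 = rng.normal(size=(n_hid, n_out))  # hidden-to-output weights

# path_w[i, h, o] = W1[i, h] * W2[h, o] for every input->hidden->output path
path_w = W1[:, :, None] * W2[None, :, :]

# Rank paths by absolute path weight and keep, for illustration, the top 25%
threshold = np.quantile(np.abs(path_w), 0.75)
keep = np.abs(path_w) >= threshold
print(f"kept {keep.sum()} of {keep.size} paths")
```

Paths whose mask entry is False would then be removed, lightening the network before interpretation.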

This study compares the proposed “path weights” method for network lightening with two established input-ranking methods, namely Garson’s and Paliwal and Kumar’s.

All of the network-lightening methods reduce network complexity, favoring interpretability while keeping prediction accuracy well above chance level. However, Garson’s and Paliwal and Kumar’s methods retain all of the weights between the hidden layer and the output layer, which keeps the number of connections high and limits the analysis of hidden-node importance in the network’s decision-making. In contrast, the proposed method based on “path weights” yields lighter networks while retaining a high prediction rate, and thus a high ratio of hits per network connection. Moreover, the inputs of the lightened neural network are consistent with the established literature, identifying the brain regions that participate in motor execution.
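For contrast, Garson’s input-ranking method [15] distributes each hidden-to-output weight magnitude over the input-to-hidden weights feeding that hidden node; since it only ranks inputs, every hidden-to-output connection survives. Below is a minimal sketch for a single output unit, under the same hypothetical weight layout as above (Paliwal and Kumar’s variant is not sketched here).

```python
import numpy as np

def garson_importance(W1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Garson (1991) relative importance of each input for one output unit.

    W1: (n_in, n_hid) input-to-hidden weights; w2: (n_hid,) hidden-to-output
    weights. Names and shapes are illustrative assumptions.
    """
    share = np.abs(W1) / np.abs(W1).sum(axis=0)  # each column sums to 1
    contrib = share * np.abs(w2)                 # scale by output-weight size
    importance = contrib.sum(axis=1)             # aggregate over hidden nodes
    return importance / importance.sum()         # normalize to proportions

rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 4))
w2 = rng.normal(size=4)
print(garson_importance(W1, w2))  # relative importance of the 6 inputs
```

Because such rankings leave the hidden-to-output weights untouched, pruning by input importance alone cannot reveal which hidden nodes matter, which is the gap the path-weights analysis addresses.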

Notes

  1. https://www.humanconnectome.org/study/hcp-young-adult.

  2. https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FSL.

  3. https://www.r-project.org/.

  4. https://www.rstudio.com/.

References

  1. Haynes, J.-D., Rees, G.: Decoding mental states from brain activity in humans. Nat. Rev. Neurosci. 7, 523–534 (2006). https://doi.org/10.1038/nrn1931

  2. Hanson, S.J., Matsuka, T., Haxby, J.V.: Combinatorial codes in ventral temporal lobe for object recognition: Haxby (2001) revisited: is there a “face” area? Neuroimage 23, 156–166 (2004). https://doi.org/10.1016/j.neuroimage.2004.05.020

  3. Onut, I.-V., Ghorbani, A.A.: Classifying cognitive states from fMRI data using neural networks. In: Proceedings. 2004 IEEE International Joint Conference on Neural Networks, pp. 2871–2875 (2004). https://doi.org/10.1109/IJCNN.2004.1381114

  4. Sona, D., Veeramachaneni, S., Olivetti, E., Avesani, P.: Inferring cognition from fMRI brain images. In: de Sá, J.M., Alexandre, L.A., Duch, W., Mandic, D. (eds.) ICANN 2007. LNCS, vol. 4669, pp. 869–878. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-74695-9_89

  5. Marques dos Santos, J.P., Moutinho, L., Castelo-Branco, M.: ‘Mind reading’: hitting cognition by using ANNs to analyze fMRI data in a paradigm exempted from motor responses. In: International Workshop on Artificial Neural Networks and Intelligent Information Processing (ANNIIP 2014), pp. 45–52. Scitepress (Science and Technology Publications, Lda.), Vienna, Austria (2014). https://doi.org/10.5220/0005126400450052

  6. Weygandt, M., Stark, R., Blecker, C., Walter, B., Vaitl, D.: Real-time fMRI pattern-classification using artificial neural networks. Clin. Neurophysiol. 118, e114 (2007). https://doi.org/10.1016/j.clinph.2006.11.265

  7. de Oña, J., Garrido, C.: Extracting the contribution of independent variables in neural network models: a new approach to handle instability. Neural Comput. Appl. 25(3–4), 859–869 (2014). https://doi.org/10.1007/s00521-014-1573-5

  8. Adadi, A., Berrada, M.: Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160 (2018). https://doi.org/10.1109/ACCESS.2018.2870052

  9. Samek, W., Müller, K.-R.: Towards explainable artificial intelligence. In: Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K.-R. (eds.) Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. LNCS (LNAI), vol. 11700, pp. 5–22. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28954-6_1

  10. Tjoa, E., Guan, C.: A survey on explainable artificial intelligence (XAI): toward medical XAI. IEEE Trans. Neural Netw. Learn. Syst. 32, 4793–4813 (2021). https://doi.org/10.1109/TNNLS.2020.3027314

  11. Blalock, D., Gonzalez Ortiz, J.J., Frankle, J., Guttag, J.: What is the state of neural network pruning? In: Dhillon, I., Papailiopoulos, D., Sze, V. (eds.) 3rd Conference on Machine Learning and Systems, MLSys 2020, vol. 2, pp. 129–146, Austin (TX), USA (2020)

  12. Zhao, F., Zeng, Y.: Dynamically optimizing network structure based on synaptic pruning in the brain. Front. Syst. Neurosci. 15, 620558 (2021). https://doi.org/10.3389/fnsys.2021.620558

  13. Mirkes, E.M.: Artificial neural network pruning to extract knowledge. In: 2020 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2020). https://doi.org/10.1109/IJCNN48605.2020.9206861

  14. Olden, J.D., Joy, M.K., Death, R.G.: An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data. Ecol. Model. 178, 389–397 (2004). https://doi.org/10.1016/j.ecolmodel.2004.03.013

  15. Garson, D.G.: Interpreting neural network connection weights. AI Expert 6, 46–51 (1991)

  16. Paliwal, M., Kumar, U.A.: Assessing the contribution of variables in feed forward neural network. Appl. Soft Comput. 11, 3690–3696 (2011). https://doi.org/10.1016/j.asoc.2011.01.040

  17. Fischer, A.: How to determine the unique contributions of input-variables to the nonlinear regression function of a multilayer perceptron. Ecol. Model. 309–310, 60–63 (2015). https://doi.org/10.1016/j.ecolmodel.2015.04.015

  18. de Sá, C.R.: Variance-based feature importance in neural networks. In: Kralj Novak, P., Šmuc, T., Džeroski, S. (eds.) Discovery Science. LNCS (LNAI), vol. 11828, pp. 306–315. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-33778-0_24

  19. Luíza da Costa, N., Dias de Lima, M., Barbosa, R.: Evaluation of feature selection methods based on artificial neural network weights. Expert Syst. Appl. 168, 114312 (2021). https://doi.org/10.1016/j.eswa.2020.114312

  20. Bondarenko, A., Borisov, A., Alekseeva, L.: Neurons vs weights pruning in artificial neural networks. In: 10th International Scientific and Practical Conference on Environment. Technologies. Resources, vol. 3, pp. 22–28. Rēzekne Academy of Technologies, Rēzekne (2015)

  21. Karnin, E.D.: A simple procedure for pruning back-propagation trained neural networks. IEEE Trans. Neural Netw. 1, 239–242 (1990). https://doi.org/10.1109/72.80236

  22. Penfield, W., Boldrey, E.: Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain 60, 389–443 (1937). https://doi.org/10.1093/brain/60.4.389

  23. Yeo, B.T.T., et al.: The organization of the human cerebral cortex estimated by intrinsic functional connectivity. J. Neurophysiol. 106, 1125–1165 (2011). https://doi.org/10.1152/jn.00338.2011

  24. Van Essen, D.C., Glasser, M.F.: The human connectome project: progress and prospects. Cerebrum 2016, cer-10-16 (2016)

  25. Van Essen, D.C., Smith, S.M., Barch, D.M., Behrens, T.E.J., Yacoub, E., Ugurbil, K.: The WU-Minn human connectome project: an overview. Neuroimage 80, 62–79 (2013). https://doi.org/10.1016/j.neuroimage.2013.05.041

  26. Elam, J.S., et al.: The human connectome project: a retrospective. Neuroimage 244, 118543 (2021). https://doi.org/10.1016/j.neuroimage.2021.118543

  27. Buckner, R.L., Krienen, F.M., Castellanos, A., Diaz, J.C., Yeo, B.T.T.: The organization of the human cerebellum estimated by intrinsic functional connectivity. J. Neurophysiol. 106, 2322–2345 (2011). https://doi.org/10.1152/jn.00339.2011

  28. Beckmann, C.F., Smith, S.M.: Probabilistic independent component analysis for functional magnetic resonance imaging. IEEE Trans. Med. Imaging 23, 137–152 (2004). https://doi.org/10.1109/TMI.2003.822821

  29. Minka, T.P.: Automatic choice of dimensionality for PCA. Technical Report 514, MIT Media Lab Vision and Modeling Group. MIT (2000)

  30. Hyvärinen, A.: Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans. Neural Netw. 10, 626–634 (1999). https://doi.org/10.1109/72.761722

  31. Buckner, R.L.: Event-related fMRI and the hemodynamic response. Hum. Brain Mapp. 6, 373–377 (1998). https://doi.org/10.1002/(SICI)1097-0193(1998)6:5/6%3c373::AID-HBM8%3e3.0.CO;2-P

  32. Limas, M.C., Meré, J.B.O., Marcos, A.G., de Pisón Ascacibar, F., Espinoza, A.P., Elías, F.A.: A MORE flexible neural network package (version 0.2-12). León (2010)

  33. R Development Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna (2010)

  34. Haykin, S.: Neural Networks and Learning Machines. Prentice Hall, New Jersey (2009)

  35. Le Cun, Y.: Efficient learning and second-order methods. Tutorial NIPS 93, 61 (1993)


Acknowledgements

This work was partially financially supported by Base Funding - UIDB/00027/2020 of the Artificial Intelligence and Computer Science Laboratory – LIACC - funded by national funds through the FCT/MCTES (PIDDAC).

Author information

Correspondence to José Paulo Marques dos Santos.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary file 1 (PDF 1314 kb)

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Marques dos Santos, J.D., Marques dos Santos, J.P. (2023). Path Weights Analyses in a Shallow Neural Network to Reach Explainable Artificial Intelligence (XAI) of fMRI Data. In: Nicosia, G., et al. Machine Learning, Optimization, and Data Science. LOD 2022. Lecture Notes in Computer Science, vol 13811. Springer, Cham. https://doi.org/10.1007/978-3-031-25891-6_31

  • DOI: https://doi.org/10.1007/978-3-031-25891-6_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25890-9

  • Online ISBN: 978-3-031-25891-6

  • eBook Packages: Computer Science, Computer Science (R0)
