
Path-Weights and Layer-Wise Relevance Propagation for Explainability of ANNs with fMRI Data

  • Conference paper
Machine Learning, Optimization, and Data Science (LOD 2023)

Abstract

The application of artificial neural networks (ANNs) to functional magnetic resonance imaging (fMRI) data has recently gained renewed attention for signal analysis, modeling of the underlying processes, and knowledge extraction. Although adequately trained ANNs are characterized by high predictive performance, the resulting models tend to be inscrutable because of their complex architectures. Explainable artificial intelligence (xAI) therefore seeks methods that probe ANNs’ structures and reveal which inputs contribute most to correct predictions and how the networks carry out their calculations up to the final decision.

Several methods have been proposed to explain black-box ANNs’ decisions, with layer-wise relevance propagation (LRP) being the current state of the art. This study investigates the consistency between LRP-based and path-weight-based analyses and how pruning and retraining the network affect each method in the context of fMRI data analysis.
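
To make the comparison concrete, the following minimal sketch (in Python with NumPy) contrasts the two ideas on a toy fully connected network: LRP redistributes an output score backwards, layer by layer, in proportion to each connection’s contribution, whereas a path-weight analysis scores an input by summing the products of the weights along every path from that input to the output. The layer sizes, random weights, epsilon value, and function names are illustrative assumptions, not the authors’ implementation.

# Minimal sketch of the two explanation ideas compared in the paper, applied to
# a small fully connected network. All sizes, weights, and constants here are
# illustrative assumptions, not the authors' actual configuration.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # input (8 features) -> hidden (4 units)
W2 = rng.normal(size=(4, 2))   # hidden -> output (2 classes)

def forward(x):
    """Forward pass with ReLU hidden units; returns hidden and output activations."""
    h = np.maximum(0.0, x @ W1)
    y = h @ W2
    return h, y

def lrp_epsilon(x, target, eps=1e-6):
    """LRP epsilon-rule sketch: redistribute the target output score back to the
    inputs in proportion to each connection's contribution z_ij = a_i * w_ij."""
    h, y = forward(x)
    # output -> hidden
    z = h[:, None] * W2                          # contributions to each output
    r_out = np.zeros_like(y); r_out[target] = y[target]
    r_hidden = (z / (z.sum(axis=0) + eps)) @ r_out
    # hidden -> input
    z = x[:, None] * W1
    r_input = (z / (z.sum(axis=0) + eps)) @ r_hidden
    return r_input

def path_weights(target):
    """Path-weight sketch: for each input, sum over all input->hidden->output
    paths the product of the weights along the path (a structural measure that,
    unlike LRP, does not depend on the activations of a particular example)."""
    return (W1 * W2[:, target][None, :]).sum(axis=1)

x = rng.normal(size=8)
print("LRP relevances:        ", np.round(lrp_epsilon(x, target=0), 3))
print("Path-weight relevances:", np.round(path_weights(target=0), 3))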

The procedure was tested with fMRI data obtained in a motor paradigm. Both methods were applied to a fully connected ANN and to its pruned and retrained versions. The results show that both methods agree on the most relevant inputs for each stimulus. Pruning did not lead to major disagreements, and retraining affected both methods similarly, exacerbating the changes initially observed after pruning. Notably, the inputs retained in the final ANN accord with the established neuroscientific literature on motor action in the brain, validating both the procedure and the explanation methods. Both methods can therefore yield valuable insights for understanding the original fMRI data and extracting knowledge.
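
The sketch below illustrates one common pruning-and-retraining cycle of the kind examined here, using magnitude-based weight pruning on a toy single-layer model: train, zero out the smallest-magnitude weights, and retrain the survivors under a mask. The pruning fraction, loss, and hyperparameters are assumptions for illustration only and do not reflect the authors’ setup.

# Schematic sketch of one pruning-and-retraining cycle: remove the
# smallest-magnitude weights, then retrain the surviving ones. The pruning
# fraction, learning rate, and loss are illustrative assumptions, not the
# authors' settings.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))                  # toy "inputs" (e.g. ROI signals)
y = (X[:, 0] + X[:, 3] > 0).astype(float)      # toy binary "stimulus" labels
W = rng.normal(size=8) * 0.1                   # single-layer weights for brevity

def train(W, mask, lr=0.1, epochs=200):
    """Retrain only the unpruned weights (logistic regression as a stand-in)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ W)))     # sigmoid predictions
        grad = X.T @ (p - y) / len(y)          # gradient of the cross-entropy
        W -= lr * grad * mask                  # pruned weights stay at zero
    return W

mask = np.ones_like(W)
W = train(W, mask)

# Prune the 50% smallest-magnitude weights, then retrain the survivors.
threshold = np.quantile(np.abs(W), 0.5)
mask = (np.abs(W) > threshold).astype(float)
W = train(W * mask, mask)
print("surviving inputs:", np.nonzero(mask)[0])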

Acknowledgments

This work was partially financially supported by Base Funding - UIDB/00027/2020 of the Artificial Intelligence and Computer Science Laboratory – LIACC - funded by national funds through the FCT/MCTES (PIDDAC).

Author information

Corresponding author

Correspondence to José Paulo Marques dos Santos.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Marques dos Santos, J.D., Marques dos Santos, J.P. (2024). Path-Weights and Layer-Wise Relevance Propagation for Explainability of ANNs with fMRI Data. In: Nicosia, G., Ojha, V., La Malfa, E., La Malfa, G., Pardalos, P.M., Umeton, R. (eds) Machine Learning, Optimization, and Data Science. LOD 2023. Lecture Notes in Computer Science, vol 14506. Springer, Cham. https://doi.org/10.1007/978-3-031-53966-4_32

  • DOI: https://doi.org/10.1007/978-3-031-53966-4_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-53965-7

  • Online ISBN: 978-3-031-53966-4

  • eBook Packages: Computer Science, Computer Science (R0)
