Improving Deep Learning Transparency: Leveraging the Power of LIME Heatmap

  • Conference paper
  • In: Service-Oriented Computing – ICSOC 2023 Workshops (ICSOC 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14518)


Abstract

Deep learning techniques have recently demonstrated remarkable precision in executing tasks, particularly in image classification. However, their intricate structures make them opaque even to knowledgeable users, obscuring the rationale behind their decision-making procedures. Interpretation methods have therefore emerged to introduce clarity into these techniques. Among these approaches, Local Interpretable Model-Agnostic Explanations (LIME) stands out as a means to enhance comprehensibility. We believe that interpretable deep learning methods have unrealised potential in a variety of application domains, an aspect that has been largely neglected in the existing literature. This research aims to demonstrate the utility of features such as the LIME heatmap in advancing classification accuracy within a designated decision-support framework. Real-world contexts take centre stage as we illustrate how the heatmap identifies the image segments exerting the greatest influence on the class score. This insight enables users to formulate sensitivity analyses and discover how manipulating the identified features could potentially mislead the deep learning classifier. As a second significant contribution, we examine the LIME heatmaps of GoogLeNet and SqueezeNet, two prevalent network models, in an effort to improve the comprehension of these models. Furthermore, we compare LIME with another recognised interpretation method, Gradient-weighted Class Activation Mapping (Grad-CAM), and evaluate their performance comprehensively. Experiments and evaluations conducted on real-world datasets containing images of fish demonstrate the effectiveness of the method, thereby validating our hypothesis.
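To make the workflow described in the abstract concrete, the sketch below shows how a LIME heatmap can be produced for a single image classification. It is a minimal illustration, not the authors' exact pipeline: it assumes the Python `lime` package and a pretrained GoogLeNet from `torchvision`, the path `fish.jpg` is a placeholder for one sample from the fish dataset, and the parameter values are illustrative. The superpixel weights for the top predicted class are assembled into a heatmap; the segments with the largest weights are the ones a sensitivity analysis would perturb to probe the classifier.

```python
# Minimal sketch: LIME heatmap for an image classifier (assumed setup:
# Python packages `lime`, `torch`, `torchvision`, `numpy`, `Pillow`).
import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms
from lime import lime_image

model = models.googlenet(weights="IMAGENET1K_V1").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classifier_fn(images):
    """Map a batch of HxWx3 uint8 arrays to class probabilities."""
    batch = torch.stack([preprocess(Image.fromarray(img)) for img in images])
    with torch.no_grad():
        logits = model(batch)
    return torch.softmax(logits, dim=1).numpy()

# "fish.jpg" is a hypothetical path standing in for one dataset image.
image = np.array(Image.open("fish.jpg").convert("RGB").resize((224, 224)))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=1, num_samples=1000)

# Superpixel weights for the top class form the heatmap; the segments
# with the largest weights contribute most to the class score.
top_label = explanation.top_labels[0]
weights = dict(explanation.local_exp[top_label])
heatmap = np.vectorize(lambda s: weights.get(s, 0.0))(explanation.segments)
```

The same procedure could be repeated with a SqueezeNet model to compare which image regions each network relies on, which is the kind of model-to-model comparison the paper describes.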



Author information


Correspondence to Helia Farhood.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Farhood, H., Najafi, M., Saberi, M. (2024). Improving Deep Learning Transparency: Leveraging the Power of LIME Heatmap. In: Monti, F., et al. Service-Oriented Computing – ICSOC 2023 Workshops. ICSOC 2023. Lecture Notes in Computer Science, vol 14518. Springer, Singapore. https://doi.org/10.1007/978-981-97-0989-2_7

  • DOI: https://doi.org/10.1007/978-981-97-0989-2_7

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0988-5

  • Online ISBN: 978-981-97-0989-2

  • eBook Packages: Computer Science, Computer Science (R0)
