DOI: 10.1145/3630050.3630177
Research Article

Explainability-based Metrics to Help Cyber Operators Find and Correct Misclassified Cyberattacks

Published: 05 December 2023

ABSTRACT

Machine Learning (ML)-based Intrusion Detection Systems (IDS) have shown promising performance. However, in a human-centered context where they are used alongside human operators, there is often a need to understand the reasons for a particular decision. Explainable AI (XAI) has partially addressed this issue, but the evaluation of such methods remains difficult and is often lacking. This paper revisits two quantitative metrics, Completeness and Correctness, to measure the quality of explanations, i.e., whether they properly reflect the actual behaviour of the IDS. Because human operators generally have to handle a large amount of information in limited time, it is important to ensure that explanations do not miss important causes and that the features flagged as important are indeed causes of an event; at the same time, explanations should remain compact to stay usable. For XAI methods based on feature importance, Completeness shows on several public datasets that explanations tend to cover all important causes only when a large number of features is retained, whereas Correctness seems to be highly correlated with the prediction results of the IDS. Finally, beyond evaluating the quality of XAI methods, Completeness and Correctness appear to enable the identification of IDS failures and can point the operator towards suspicious activity missed or misclassified by the IDS, suggesting manual investigation and correction.
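The abstract does not give the exact formulation of the two metrics, but the general idea of perturbation-based evaluation of feature-importance explanations can be sketched as follows. The Python snippet below is a minimal illustration only, assuming a scikit-learn-style binary classifier with predict_proba (class 1 taken as the attack class), a 1-D feature vector x, and a vector of feature-importance scores (e.g., SHAP values); the function names, the masking baseline, and top_k are hypothetical and are not taken from the paper.

import numpy as np

def correctness_score(model, x, importance, top_k=5, baseline=0.0):
    # Fraction of the top-k "important" features whose masking lowers the
    # attack score, i.e. that behave like actual causes of the prediction.
    # Assumes x and importance are 1-D NumPy arrays of equal length.
    ranked = np.argsort(-np.abs(importance))[:top_k]
    original = model.predict_proba(x.reshape(1, -1))[0, 1]
    causal = 0
    for f in ranked:
        perturbed = x.copy()
        perturbed[f] = baseline          # mask one important feature
        new_score = model.predict_proba(perturbed.reshape(1, -1))[0, 1]
        if new_score < original:         # masking a true cause should reduce confidence
            causal += 1
    return causal / top_k

def completeness_curve(model, x, importance, baseline=0.0):
    # Attack score as increasingly many top-ranked features are masked;
    # an explanation that captures all important causes should drive the
    # score down after masking only a few features.
    ranked = np.argsort(-np.abs(importance))
    scores = []
    perturbed = x.copy()
    for f in ranked:
        perturbed[f] = baseline
        scores.append(model.predict_proba(perturbed.reshape(1, -1))[0, 1])
    return np.array(scores)

Under these assumptions, an explanation whose masking curve drops sharply after only a few features would be compact yet complete, while a high correctness score indicates that the features flagged as important do act as causes of the IDS decision.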


Published in

SAFE '23: Proceedings of the 2023 on Explainable and Safety Bounded, Fidelitous, Machine Learning for Networking
December 2023, 37 pages
ISBN: 9798400704499
DOI: 10.1145/3630050

Copyright © 2023 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 5 December 2023

