Innovations in Explainable AI: Bridging the Gap Between Complexity and Understanding

Authors

  • Keshav Jena, School of Computer Science, MIT World Peace University, Pune, Maharashtra, India

DOI:

https://doi.org/10.32628/CSEIT2390613

Keywords:

Explainable AI, XAI, Interpretable Models, Natural Language Explanations, Model-Agnostic Techniques

Abstract

The integration of Artificial Intelligence (AI) into various domains has witnessed remarkable advancements, yet the opacity of complex AI models poses challenges for widespread acceptance and application. This research paper delves into the field of Explainable AI (XAI) and explores innovative strategies aimed at bridging the gap between the intricacies of advanced AI algorithms and the imperative for human comprehension. We investigate key developments, including interpretable model architectures, local and visual explanation techniques, natural language explanations, and model-agnostic approaches. Emphasis is placed on ethical considerations to ensure transparency and fairness in algorithmic decision-making. By surveying and analyzing these innovations, this research contributes to the ongoing discourse on making AI systems more accessible, accountable, and trustworthy, ultimately fostering a harmonious collaboration between humans and intelligent machines in an increasingly AI-driven world.
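To make the model-agnostic approaches surveyed here concrete, the sketch below builds a LIME-style local explanation: it perturbs a single input, queries a black-box classifier for its predictions on the perturbed neighbourhood, and fits a distance-weighted linear surrogate whose coefficients indicate each feature's local influence. This is a minimal illustration only; the dataset, classifier, kernel width, and the helper name explain_locally are assumptions for demonstration, not the paper's implementation.

```python
# Illustrative LIME-style model-agnostic local explanation (sketch, not the paper's code).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Black-box model: any classifier exposing predict/predict_proba would work here.
X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_locally(instance, n_samples=2000, kernel_width=0.75):
    """Fit a weighted linear surrogate around one instance and return
    per-feature coefficients as a local explanation."""
    rng = np.random.default_rng(0)
    # Perturb the instance with Gaussian noise scaled to each feature's std-dev.
    noise = rng.normal(scale=X.std(axis=0), size=(n_samples, X.shape[1]))
    neighbours = instance + noise
    # Query the black box for the probability of the class it predicts for the instance.
    target_class = black_box.predict(instance.reshape(1, -1))[0]
    probs = black_box.predict_proba(neighbours)[:, target_class]
    # Weight neighbours by proximity to the instance (RBF kernel on scaled distance).
    distances = np.linalg.norm((neighbours - instance) / X.std(axis=0), axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # Interpretable surrogate: distance-weighted ridge regression.
    surrogate = Ridge(alpha=1.0).fit(neighbours, probs, sample_weight=weights)
    return surrogate.coef_

coefficients = explain_locally(X[0])
for name, coef in zip(load_iris().feature_names, coefficients):
    print(f"{name}: {coef:+.3f}")
```

The surrogate is deliberately simple: because the weighted linear model is interpretable by construction, its coefficients translate the black box's local behaviour into a form a human can read, which is the core idea behind model-agnostic local explanation techniques.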

References

  1. Doshi-Velez, F., & Kim, B. (2017). "Towards a rigorous science of interpretable machine learning."
  2. Lundberg, S. M., & Lee, S. I. (2017). "A unified approach to interpreting model predictions." In Advances in Neural Information Processing Systems (pp. 4765-4774).
  3. Mittelstadt, B., Russell, C., & Wachter, S. (2019). "Explaining explanations in AI." In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 279-288).
  4. Ribeiro, M. T., Singh, S., & Guestrin, C. (2018). "Anchors: High-precision model-agnostic explanations." In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 32, No. 1).
  5. Samek, W., Wiegand, T., & Müller, K. R. (2017). "Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models." ITU Journal: ICT Discoveries, 1(1), 1-16.
  6. Chen, J., Song, L., Wainwright, M. J., & Jordan, M. I. (2018). "Learning to explain: An information-theoretic perspective on model interpretation." In Proceedings of the 35th International Conference on Machine Learning (Vol. 80, pp. 883-892).
  7. Gartner. (2017). "Top 10 Strategic Technology Trends for 2018." Accessed: Jun. 6, 2018. [Online]. Available: https://www.gartner.com/doc/3811368?srcId=1-6595640781
  8. Ghorbani, A., Abid, A., & Zou, J. (2019). "Interpretation of neural networks is fragile." In Proceedings of the AAAI Conference on Artificial Intelligence.
  9. Chander, A., et al. (2018). In Proceedings of MAKE-Explainable AI.

Published

2023-12-30

Section

Research Articles

How to Cite

[1]
Keshav Jena, " Innovations in Explainable AI : Bridging the Gap Between Complexity and Understanding, IInternational Journal of Scientific Research in Computer Science, Engineering and Information Technology(IJSRCSEIT), ISSN : 2456-3307, Volume 9, Issue 6, pp.118-125, November-December-2023. Available at doi : https://doi.org/10.32628/CSEIT2390613