Automated Natural Language Explanation of Deep Visual Neurons with Large Models (Student Abstract)

Authors

  • Chenxu Zhao, Iowa State University
  • Wei Qian, Iowa State University
  • Yucheng Shi, University of Georgia
  • Mengdi Huai, Iowa State University
  • Ninghao Liu, University of Georgia

DOI:

https://doi.org/10.1609/aaai.v38i21.30537

Keywords:

Explainable AI, Large Language Models, Applications Of AI, Deep Learning, Interpretation

Abstract

Interpreting deep neural networks by examining individual neurons offers distinct advantages for exploring their inner workings. Previous research has indicated that specific neurons within deep vision networks carry semantic meaning and play pivotal roles in model performance. Nonetheless, current methods for generating neuron semantics rely heavily on human intervention, which hampers their scalability and applicability. To address this limitation, this paper proposes a novel post-hoc framework that generates semantic explanations of neurons with large foundation models, without requiring human intervention or prior knowledge. Experiments with both qualitative and quantitative analyses verify the effectiveness of the proposed approach.
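
For illustration only, the general idea the abstract describes can be sketched as: probe a chosen neuron (here, a channel of a vision backbone), collect the images that activate it most strongly, and then ask a large foundation model to phrase the concept those images share. The backbone, layer choice, scoring heuristic, and overall pipeline below are assumptions made for this sketch, not the authors' implementation; the final foundation-model prompting step is model-specific and left as a comment.

```python
# Hypothetical sketch: rank images by how strongly they activate one neuron
# (a channel in a ResNet feature map), then hand the top images to a large
# vision-language model for a natural-language concept description.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

acts = {}
def hook(_module, _inputs, output):
    # Record the feature maps of the hooked layer on each forward pass.
    acts["feat"] = output.detach()
model.layer4.register_forward_hook(hook)

def neuron_score(img_path, channel):
    """Mean spatial activation of one channel (the 'neuron') for one image."""
    x = preprocess(Image.open(img_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        model(x)
    return acts["feat"][0, channel].mean().item()

def top_activating_images(image_paths, channel, k=10):
    # Images sorted by how strongly they drive the chosen neuron.
    ranked = sorted(image_paths, key=lambda p: neuron_score(p, channel), reverse=True)
    return ranked[:k]

# The selected images would then be passed to a large foundation model
# (e.g., a captioning or vision-language model) with a prompt such as
# "What visual concept do these images have in common?"; that step is
# model-specific and omitted here.
```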

Published

2024-03-24

How to Cite

Zhao, C., Qian, W., Shi, Y., Huai, M., & Liu, N. (2024). Automated Natural Language Explanation of Deep Visual Neurons with Large Models (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23712-23713. https://doi.org/10.1609/aaai.v38i21.30537