AnomalyGPT: Detecting Industrial Anomalies Using Large Vision-Language Models

Authors

  • Zhaopeng Gu, Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
  • Bingke Zhu, Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Objecteye Inc., Beijing, China
  • Guibo Zhu, Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
  • Yingying Chen, Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Objecteye Inc., Beijing, China
  • Ming Tang, Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
  • Jinqiao Wang, Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Objecteye Inc., Beijing, China

DOI:

https://doi.org/10.1609/aaai.v38i3.27963

Keywords:

CV: Language and Vision, ML: Multimodal Learning

Abstract

Large Vision-Language Models (LVLMs) such as MiniGPT-4 and LLaVA have demonstrated the capability to understand images and achieve remarkable performance on various visual tasks. However, despite their strong ability to recognize common objects thanks to extensive training data, they lack specific domain knowledge and have a weaker understanding of localized details within objects, which hinders their effectiveness on the Industrial Anomaly Detection (IAD) task. On the other hand, most existing IAD methods provide only anomaly scores and require manually set thresholds to distinguish normal from abnormal samples, which restricts their practical deployment. In this paper, we explore the use of LVLMs to address the IAD problem and propose AnomalyGPT, a novel LVLM-based IAD approach. We generate training data by simulating anomalous images and producing corresponding textual descriptions for each image. We also employ an image decoder to provide fine-grained semantics and design a prompt learner to fine-tune the LVLM with prompt embeddings. AnomalyGPT eliminates the need for manual threshold adjustment and can directly assess the presence and location of anomalies. Additionally, AnomalyGPT supports multi-turn dialogue and exhibits impressive few-shot in-context learning capabilities. With only one normal shot, AnomalyGPT achieves state-of-the-art performance on the MVTec-AD dataset, with an accuracy of 86.1%, an image-level AUC of 94.1%, and a pixel-level AUC of 95.3%.
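To make the prompt-learner idea concrete, the sketch below shows one plausible way such a module could look in PyTorch. It is an illustration only: the layer sizes, the number of prompt tokens, and the 4096-dimensional LLM embedding width are our assumptions, not the configuration reported in the paper. The module turns the image decoder's pixel-level anomaly map into a small set of prompt embeddings that can be prepended to the LVLM's input sequence.

    import torch
    import torch.nn as nn

    class PromptLearner(nn.Module):
        """Illustrative prompt learner: maps a pixel-level anomaly map from
        the image decoder into a fixed number of prompt embeddings for the
        LLM. Dimensions and layer choices are assumptions, not the paper's
        exact configuration."""

        def __init__(self, n_prompts: int = 9, llm_dim: int = 4096):
            super().__init__()
            # Learnable base prompts, independent of the input image.
            self.base_prompts = nn.Parameter(torch.randn(n_prompts, llm_dim))
            # Small conv net that summarizes a 224x224 anomaly map into one
            # extra prompt token carrying localization information.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=4, stride=4),   # 224 -> 56
                nn.ReLU(),
                nn.Conv2d(8, 16, kernel_size=4, stride=4),  # 56 -> 14
                nn.ReLU(),
                nn.Conv2d(16, llm_dim, kernel_size=14),     # 14 -> 1
            )

        def forward(self, anomaly_map: torch.Tensor) -> torch.Tensor:
            # anomaly_map: (B, 1, H, W) pixel-level scores from the decoder.
            b = anomaly_map.size(0)
            map_token = self.encoder(anomaly_map).flatten(2).transpose(1, 2)  # (B, 1, llm_dim)
            base = self.base_prompts.unsqueeze(0).expand(b, -1, -1)           # (B, n, llm_dim)
            # The concatenated embeddings are prepended to the LLM input.
            return torch.cat([base, map_token], dim=1)

    # Example: one 224x224 anomaly map -> 10 prompt embeddings of width 4096.
    learner = PromptLearner()
    prompts = learner(torch.rand(1, 1, 224, 224))
    print(prompts.shape)  # torch.Size([1, 10, 4096])

Feeding localization information to the LLM through such embeddings is what allows the model to answer directly whether an anomaly is present and where it is, which is why no anomaly-score threshold needs to be hand-tuned.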

Published

2024-03-24

How to Cite

Gu, Z., Zhu, B., Zhu, G., Chen, Y., Tang, M., & Wang, J. (2024). AnomalyGPT: Detecting Industrial Anomalies Using Large Vision-Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 1932-1940. https://doi.org/10.1609/aaai.v38i3.27963

Issue

Vol. 38 No. 3 (2024)

Section

AAAI Technical Track on Computer Vision II