Explaining image classification model (CNN-based) predictions with LIME

Eray ÖNLER *

Department of Biosystem Engineering, Faculty of Agriculture, Tekirdag Namik Kemal University, Tekirdag, Turkiye.
 
Research Article
World Journal of Advanced Engineering Technology and Sciences, 2022, 07(02), 275-280.
Article DOI: 10.30574/wjaets.2022.7.2.0176
Publication history: 
Received on 17 November 2022; revised on 26 December 2022; accepted on 29 December 2022
 
Abstract: 
Convolutional neural network (CNN) models are black-box methods: the model's predictions are derived from the training dataset, but the user cannot see how a given decision is reached. To build better classification models, however, it is important to monitor and understand how these models make decisions; in this way, classification performance can be analyzed and improved. In this study, we examine where a model that classifies diseases in cassava leaves focuses on the image when making a decision. We used a cassava leaf dataset with five classes: Cassava Bacterial Blight (CBB), Cassava Brown Streak Disease (CBSD), Cassava Green Mite (CGM), Cassava Mosaic Disease (CMD), and healthy leaves. An ImageNet-pretrained Xception architecture was used as the base model for transfer learning. The LIME library (v0.2.0) was used to examine which parts of the image affect the predictions made with the CNN model. With the LimeImageExplainer function, the image is segmented into superpixels, which are then weighted by their contribution to the model's prediction and visualized. In this way, we can visually understand how the model arrives at its predictions.
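The superpixel-weighting idea behind LimeImageExplainer can be illustrated with a minimal from-scratch sketch in plain NumPy (not the lime library itself): segment a toy image into superpixels, randomly mask them out, query a stand-in black-box scorer, and fit a locally weighted linear model whose coefficients rank each superpixel's influence. The toy 8x8 image, the fixed 2x2 grid segmentation, and the `black_box` scorer are illustrative assumptions, not the paper's actual pipeline (the lime library uses quickshift segmentation and a real CNN).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8x8 grayscale "image": the black-box scorer below depends only on
# the top-left quadrant, so the sketch should rank the superpixel
# covering that quadrant as the most influential.
image = rng.random((8, 8))

def black_box(img):
    """Stand-in for a CNN class probability: mean of the top-left 4x4."""
    return img[:4, :4].mean()

# 1. Segment the image into superpixels. Here: a fixed 2x2 grid of 4x4
#    blocks, labeled 0..3 (the lime library segments adaptively instead).
segments = np.zeros((8, 8), dtype=int)
segments[:4, 4:] = 1
segments[4:, :4] = 2
segments[4:, 4:] = 3
n_seg = 4

# 2. Sample binary masks: each bit keeps a superpixel or replaces it
#    with a baseline value (0 here), then query the black box.
n_samples = 500
masks = rng.integers(0, 2, size=(n_samples, n_seg))
preds = np.empty(n_samples)
for i, m in enumerate(masks):
    perturbed = image * m[segments]   # zero out the "off" superpixels
    preds[i] = black_box(perturbed)

# 3. Weight each perturbed sample by its proximity to the original image
#    (all superpixels on), using an exponential kernel as LIME does.
distances = 1.0 - masks.mean(axis=1)  # fraction of superpixels turned off
weights = np.exp(-(distances ** 2) / 0.25)

# 4. Fit a weighted linear surrogate: pred ~ intercept + sum(coef * mask).
#    The coefficients are the per-superpixel importances.
X = np.hstack([np.ones((n_samples, 1)), masks])
w = np.sqrt(weights)
coefs, *_ = np.linalg.lstsq(w[:, None] * X, w * preds, rcond=None)
importance = coefs[1:]

print("superpixel importances:", np.round(importance, 3))
print("most influential superpixel:", int(np.argmax(importance)))
```

Because the toy scorer depends only on superpixel 0, the surrogate assigns essentially all importance to it; in the paper's setting the same coefficients are what LIME visualizes as highlighted image regions.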
 
Keywords: 
Convolutional neural networks; Explainable AI; Artificial intelligence; Computer vision
 