Abstract
Over recent years, deep neural network models have proven successful at solving many complex problems. However, research on these methods focuses mostly on the quality of the final results and rarely provides sufficient evidence about the factors that contribute to their outcomes. This has created a growing demand for analysis techniques. Visualisation techniques let us verify whether a network works as expected and, where possible, improve the output of a given model. They can also serve as an optimisation tool, boosting a network's performance by pruning less important neurons. Finally, if we know how a given model works, we can deliberately disrupt it. This paper shows how a Class Activation Map can be combined with feature maps to determine a few of the most contributing pixels for a given input and modify them to perform an adversarial attack.
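The idea sketched in the abstract can be illustrated with a minimal numpy example. This is not the paper's implementation: it assumes a CAM-style network (last convolutional layer followed by global average pooling and a fully connected layer), takes the last-layer feature maps and the target class's FC weights as given, and uses nearest-neighbour upsampling; the function names `cam_top_pixels` and `perturb` and the parameter `eps` are illustrative choices.

```python
import numpy as np

def cam_top_pixels(feature_maps, class_weights, input_shape, k=5):
    """Return the (row, col) coordinates of the k most contributing pixels.

    feature_maps  : (C, h, w) activations of the last convolutional layer
    class_weights : (C,) FC weights of the target class (CAM assumption)
    input_shape   : (H, W) resolution of the input image
    """
    # Class Activation Map = weighted sum of feature maps over channels
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (h, w)
    cam = np.maximum(cam, 0)                                 # keep positive evidence
    # Upsample the CAM to input resolution (nearest neighbour for simplicity)
    H, W = input_shape
    ys = np.arange(H) * cam.shape[0] // H
    xs = np.arange(W) * cam.shape[1] // W
    cam_up = cam[np.ix_(ys, xs)]                             # (H, W)
    # Pick the k highest-activation pixel coordinates
    flat = np.argsort(cam_up.ravel())[::-1][:k]
    return np.column_stack(np.unravel_index(flat, cam_up.shape))

def perturb(image, pixels, eps=0.5):
    """Modify only the selected pixels (a crude adversarial perturbation)."""
    out = image.copy()
    for y, x in pixels:
        out[y, x] = np.clip(out[y, x] + eps, 0.0, 1.0)
    return out
```

In a real attack the feature maps would come from a forward pass of the trained network and the perturbation would be tuned (and re-checked against the classifier) rather than a fixed offset; the sketch only shows how a CAM localises the candidate pixels.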
Copyright information
© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Szandała, T. (2020). Using Convolutional Network Visualisation to Determine the Most Significant Pixels. In: Zamojski, W., Mazurkiewicz, J., Sugier, J., Walkowiak, T., Kacprzyk, J. (eds) Theory and Applications of Dependable Computer Systems. DepCoS-RELCOMEX 2020. Advances in Intelligent Systems and Computing, vol 1173. Springer, Cham. https://doi.org/10.1007/978-3-030-48256-5_61
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-48255-8
Online ISBN: 978-3-030-48256-5
eBook Packages: Intelligent Technologies and Robotics (R0)