On the generation of adversarial examples for image quality assessment

  • Original article
  • Published in: The Visual Computer

Abstract

We study the generation of adversarial examples to test, assess, and improve deep learning-based image quality assessment (IQA) algorithms. This matters because social media platforms and other providers rely on IQA models to monitor the content they ingest and to control the quality of the pictures that are shared, yet IQA models based on deep learning are vulnerable to adversarial attacks. Taking the characteristics of IQA into account, we analyze several adversarial example generation methods from the image classification field and generate adversarial IQA examples using model gradient information, image pixel information, and a reconstruction loss function. We also build an adversarial example generation tool that produces aggressive adversarial examples with high attack success rates. We hope it will help IQA researchers assess and improve the robustness of their models.
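The abstract's mention of attacks driven by model gradient information suggests an FGSM-style perturbation. The sketch below is illustrative only and is not the authors' published generation tool; fgsm_iqa_attack, iqa_model, and the epsilon budget are hypothetical placeholders standing in for any differentiable no-reference IQA network.

    # Illustrative sketch only (not the paper's exact method): an FGSM-style
    # attack that uses the gradient of a differentiable no-reference IQA model
    # to lower its predicted quality score. `iqa_model` is a hypothetical
    # placeholder for any deep IQA network mapping an image tensor to a score.
    import torch

    def fgsm_iqa_attack(iqa_model, image, epsilon=2.0 / 255.0):
        # Enable gradients with respect to the input pixels.
        image = image.clone().detach().requires_grad_(True)
        score = iqa_model(image)          # predicted quality score
        # Ascending the negative score pushes the prediction downward;
        # flip the sign to instead inflate the score of a degraded image.
        loss = -score.mean()
        loss.backward()
        # One signed-gradient step, bounded by epsilon in the L-infinity norm.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

Given a pretrained differentiable IQA model, fgsm_iqa_attack(model, img) returns a perturbed image whose predicted score should drop while the change stays visually small; iterative or momentum-based variants follow the same pattern with repeated smaller steps.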


Data availability

The data that support the findings of this study are available from the corresponding author, Q. S., upon reasonable request.


Author information

Corresponding author

Correspondence to Qingbing Sang.

Ethics declarations

Conflict of interest

We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Sang, Q., Zhang, H., Liu, L. et al. On the generation of adversarial examples for image quality assessment. Vis Comput 40, 3183–3198 (2024). https://doi.org/10.1007/s00371-023-03019-1

