Adversarial attacks and adversarial training for burn image segmentation based on deep learning

Original Article · Medical & Biological Engineering & Computing

Abstract

Deep learning is widely applied to image classification and segmentation, yet adversarial attacks can degrade a model's results on both tasks. Medical images are particularly affected: constraints such as shooting angle, ambient lighting, and heterogeneous capture devices mean they typically contain various forms of noise. To address the impact of such physically meaningful perturbations on existing deep learning models for burn image segmentation, we simulate attack methods inspired by natural phenomena and propose an adversarial training approach designed specifically for this task. The method is evaluated on our burn dataset. After defensive training with our approach, segmentation accuracy on adversarial samples rises from 54% to 82.19%, a 1.97% improvement over conventional adversarial training, while substantially reducing training time. Ablation experiments validate the effectiveness of the individual losses, and we assess and compare training results on different adversarial samples using various metrics.
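To make the training scheme the abstract summarizes concrete, the sketch below shows generic PGD-style adversarial training for a binary segmentation model in PyTorch. This is a minimal illustration, not the authors' method: the natural-noise initialization (brightness shift plus Gaussian noise, loosely mimicking lighting and device variation), the attack budget (eps, alpha, steps), the loss mixing weight, and the assumption of a one-channel-logit model with float masks are all assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def natural_perturbation(images, brightness=0.05, noise_std=0.02):
    """Hypothetical 'natural' corruption: per-image brightness shift plus
    Gaussian noise, loosely mimicking lighting and device variation."""
    shift = brightness * (2 * torch.rand(images.size(0), 1, 1, 1,
                                         device=images.device) - 1)
    return (images + shift + noise_std * torch.randn_like(images)).clamp(0, 1)

def pgd_attack(model, images, masks, eps=8/255, alpha=2/255, steps=10):
    """Craft adversarial images by maximizing the segmentation loss,
    projected onto an L-infinity ball of radius eps around the input.
    Assumes model(x) returns one-channel logits and masks are float in [0,1]."""
    adv = natural_perturbation(images).detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.binary_cross_entropy_with_logits(model(adv), masks)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()        # gradient ascent step
        adv = images + (adv - images).clamp(-eps, eps)  # project into eps-ball
        adv = adv.clamp(0, 1)                           # keep a valid image
    return adv.detach()

def adversarial_training_step(model, optimizer, images, masks, mix=0.5):
    """One update on a weighted mix of clean and adversarial loss."""
    model.eval()   # fix batch-norm statistics while crafting the attack
    adv = pgd_attack(model, images, masks)
    model.train()
    optimizer.zero_grad()
    loss = (1 - mix) * F.binary_cross_entropy_with_logits(model(images), masks) \
           + mix * F.binary_cross_entropy_with_logits(model(adv), masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a training loop, `adversarial_training_step` would simply replace the usual clean-batch update; the mix weight trades clean accuracy against robustness, and the attack budget controls how strong the simulated perturbations are.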



Funding

This work was supported by Zhejiang Key Research and Development Project (2022C01048).

Author information


Contributions

Luying Chen: conceptualization, data curation, methodology, software, investigation, formal analysis, writing (original draft); Jiakai Liang: validation, formal analysis, investigation; Chao Wang: visualization, investigation; Keqiang Yue: supervision, project administration; Wenjun Li: supervision, funding acquisition; Zhihui Fu: resources. All authors contributed significantly, agree with the content of the manuscript, and have approved it for submission without any potential competing interests. The paper has not been submitted to any other journal.

Corresponding author

Correspondence to Keqiang Yue.

Ethics declarations

Conflict of interest

We declare that we have no conflicts of interest related to this work, and no commercial or associative interests that represent a conflict of interest in connection with the submitted work. The contents of this manuscript have not been copyrighted or published previously and are not now under consideration for publication elsewhere. There are no directly related manuscripts or abstracts, published or unpublished, by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chen, L., Liang, J., Wang, C. et al. Adversarial attacks and adversarial training for burn image segmentation based on deep learning. Med Biol Eng Comput (2024). https://doi.org/10.1007/s11517-024-03098-9
