
Towards robust neural networks via a global and monotonically decreasing robustness training strategy


  • Research Article
  • Published in Frontiers of Information Technology & Electronic Engineering

Abstract

Robustness of deep neural networks (DNNs) has raised serious concern in both academia and industry, especially in safety-critical domains. Instead of verifying whether the robustness property holds for a given neural network, this paper focuses on training neural networks that are robust with respect to given perturbations. State-of-the-art training methods, interval bound propagation (IBP) and CROWN-IBP, perform well under small perturbations, but their performance declines significantly under large perturbations, a phenomenon we term "drawdown risk" in this paper. Specifically, drawdown risk refers to the phenomenon that IBP-family training methods fail to provide, under larger perturbations, the expected robust neural networks that they provide under smaller ones. To alleviate this drawdown risk, we propose a global and monotonically decreasing robustness training strategy that takes multiple perturbations into account during each training epoch (global robustness training) and combines the corresponding robustness losses with monotonically decreasing weights (monotonically decreasing robustness training). Experiments demonstrate that the proposed strategy maintains performance under small perturbations while greatly alleviating the drawdown risk under large perturbations. Notably, our training method also achieves higher model accuracy than the original training methods, indicating that the proposed strategy gives more balanced consideration to robustness and accuracy.
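The loss combination described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names (`monotonic_weights`, `global_robust_loss`), the linear-decay weighting scheme, the assignment of larger weights to smaller perturbation radii, and the toy loss surrogate are all assumptions; in practice, the per-perturbation robustness loss would come from a certified bound computation such as IBP or CROWN-IBP.

```python
def monotonic_weights(n):
    """Monotonically decreasing weights that sum to 1.

    Linear decay is an assumed scheme; e.g., n=3 gives [1/2, 1/3, 1/6].
    """
    raw = [n - i for i in range(n)]   # [n, n-1, ..., 1]
    total = sum(raw)
    return [r / total for r in raw]

def global_robust_loss(robust_loss, epsilons):
    """Combine robustness losses over multiple perturbation radii.

    `robust_loss(eps)` is a placeholder for the certified robustness
    loss at radius eps (e.g., computed via IBP bounds). `epsilons` is
    assumed sorted in ascending order, so smaller perturbations
    receive larger weights.
    """
    weights = monotonic_weights(len(epsilons))
    return sum(w * robust_loss(eps) for w, eps in zip(weights, epsilons))

# Toy surrogate in which the robust loss grows linearly with the radius:
combined = global_robust_loss(lambda eps: eps * 10.0, [0.1, 0.2, 0.4])
```

In an actual training loop, `combined` would be minimized (possibly together with a standard accuracy loss) once per epoch, so that every perturbation radius contributes to every epoch of training, which is the "global" aspect of the strategy.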



Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

  • Balunović M, Baader M, Singh G, et al., 2019. Certifying geometric robustness of neural networks. Proc 33rd Int Conf on Neural Information Processing Systems, Article 1372.

  • Bojarski M, Testa DD, Dworakowski D, et al., 2016. End to end learning for self-driving cars. https://arxiv.org/abs/1604.07316

  • Casadio M, Komendantskaya E, Daggitt ML, et al., 2022. Neural network robustness as a verification property: a principled case study. Proc 34th Int Conf on Computer Aided Verification, p.219–231. https://doi.org/10.1007/978-3-031-13185-1_11

  • Chen XL, He KM, 2021. Exploring simple Siamese representation learning. Proc IEEE/CVF Conf on Computer Vision and Pattern Recognition, p.15750–15758. https://doi.org/10.1109/CVPR46437.2021.01549

  • Cohen JM, Rosenfeld E, Kolter JZ, 2019. Certified adversarial robustness via randomized smoothing. Proc 36th Int Conf on Machine Learning, p.1310–1320.

  • Devlin J, Chang MW, Lee K, et al., 2018. BERT: pre-training of deep bidirectional transformers for language understanding. Proc Conf of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, p.4171–4186. https://doi.org/10.18653/v1/N19-1423

  • Du TY, Ji SL, Shen LJ, et al., 2021. Cert-RNN: towards certifying the robustness of recurrent neural networks. Proc ACM SIGSAC Conf on Computer and Communications Security, p.516–534. https://doi.org/10.1145/3460120.3484538

  • Duda RO, Hart PE, Stork DG, 2001. Pattern Classification (2nd Ed.). Wiley, New York, USA.


  • Dvijotham K, Gowal S, Stanforth R, et al., 2018. Training verified learners with learned verifiers. https://arxiv.org/abs/1805.10265

  • Ehlers R, 2017. Formal verification of piece-wise linear feed-forward neural networks. Proc 15th Int Symp on Automated Technology for Verification and Analysis, p.269–286. https://doi.org/10.1007/978-3-319-68167-2_19

  • Goodfellow IJ, Shlens J, Szegedy C, 2015. Explaining and harnessing adversarial examples. Proc 3rd Int Conf on Learning Representations.

  • Gowal S, Dvijotham K, Stanforth R, et al., 2018. On the effectiveness of interval bound propagation for training verifiably robust models. https://arxiv.org/abs/1810.12715

  • Guo XW, Wan WJ, Zhang ZD, et al., 2021. Eager falsification for accelerating robustness verification of deep neural networks. Proc 32nd IEEE Int Symp on Software Reliability Engineering, p.345–356. https://doi.org/10.1109/ISSRE52982.2021.00044

  • Hein M, Andriushchenko M, 2017. Formal guarantees on the robustness of a classifier against adversarial manipulation. Proc 31st Int Conf on Neural Information Processing Systems, p.2266–2276.

  • Huster T, Chiang CYJ, Chadha R, 2019. Limitations of the Lipschitz constant as a defense against adversarial examples. Proc Joint European Conf on Machine Learning and Knowledge Discovery in Databases, p.16–29. https://doi.org/10.1007/978-3-030-13453-2_2

  • Katz G, Barrett C, Dill DL, et al., 2017. Reluplex: an efficient SMT solver for verifying deep neural networks. Proc 29th Int Conf on Computer Aided Verification, p.97–117. https://doi.org/10.1007/978-3-319-63387-9_5

  • Ko CY, Lyu ZY, Weng L, et al., 2019. POPQORN: quantifying robustness of recurrent neural networks. Proc 36th Int Conf on Machine Learning, p.3468–3477.

  • Lecuyer M, Atlidakis V, Geambasu R, et al., 2019. Certified robustness to adversarial examples with differential privacy. Proc IEEE Symp on Security and Privacy, p.656–672. https://doi.org/10.1109/SP.2019.00044

  • Leino K, Wang ZF, Fredrikson M, 2021. Globally-robust neural networks. Proc 38th Int Conf on Machine Learning, p.6212–6222.

  • Li JL, Liu JC, Yang PF, et al., 2019. Analyzing deep neural networks with symbolic propagation: towards higher precision and faster verification. Proc 26th Int Static Analysis Symp, p.296–319. https://doi.org/10.1007/978-3-030-32304-2_15

  • Liang Z, Liu WW, Wu TR, et al., 2023. Advances and prospects of training methods for robust neural networks. Sci Technol Fores, 2(1):78–89 (in Chinese). https://doi.org/10.3981/j.issn.2097-0781.2023.01.006


  • Liu JX, Xing YH, Shi XM, et al., 2022. Abstraction and refinement: towards scalable and exact verification of neural networks. https://arxiv.org/abs/2207.00759

  • Liu WW, Song F, Zhang THR, et al., 2020. Verifying ReLU neural networks from a model checking perspective. J Comput Sci Technol, 35(6):1365–1381. https://doi.org/10.1007/s11390-020-0546-7


  • Ma L, Juefei-Xu F, Zhang FY, et al., 2018. DeepGauge: multi-granularity testing criteria for deep learning systems. Proc 33rd ACM/IEEE Int Conf on Automated Software Engineering, p.120–131. https://doi.org/10.1145/3238147.3238202

  • Madry A, Makelov A, Schmidt L, et al., 2018. Towards deep learning models resistant to adversarial attacks. Proc 6th Int Conf on Learning Representations.

  • Mirman M, Gehr T, Vechev MT, 2018. Differentiable abstract interpretation for provably robust neural networks. Proc 35th Int Conf on Machine Learning, p.3575–3583.

  • Murphy KP, 2012. Machine Learning: a Probabilistic Perspective. MIT Press, Cambridge, USA.


  • Ryou W, Chen JY, Balunovic M, et al., 2021. Scalable polyhedral verification of recurrent neural networks. Proc 33rd Int Conf on Computer Aided Verification, p.225–248. https://doi.org/10.1007/978-3-030-81685-8_10

  • Salman H, Yang G, Zhang H, et al., 2019. A convex relaxation barrier to tight robust verification of neural networks. Proc 33rd Int Conf on Neural Information Processing Systems, Article 882.

  • Singh G, Gehr T, Mirman M, et al., 2018. Fast and effective robustness certification. Proc 32nd Int Conf on Neural Information Processing Systems, p.10825–10836.

  • Singh G, Gehr T, Püschel M, et al., 2019. An abstract domain for certifying neural networks. Proc ACM on Programming Languages, p.1–30. https://doi.org/10.1145/3290354

  • Sun B, Sun J, Dai T, et al., 2021. Probabilistic verification of neural networks against group fairness. Proc 24th Int Symp on Formal Methods, p.83–102. https://doi.org/10.1007/978-3-030-90870-6_5

  • Tian Y, Yang WJ, Wang J, 2021. Image fusion using a multilevel image decomposition and fusion method. Appl Opt, 60(24):7466–7479. https://doi.org/10.1364/AO.432397


  • Tjeng V, Xiao KY, Tedrake R, 2019. Evaluating robustness of neural networks with mixed integer programming. Proc 7th Int Conf on Learning Representations.

  • Tran HD, Manzanas Lopez D, Musau P, et al., 2019. Star-based reachability analysis of deep neural networks. Proc 3rd Int Symp on Formal Methods, p.670–686. https://doi.org/10.1007/978-3-030-30942-8_39

  • Wang SQ, Pei KX, Whitehouse J, et al., 2018a. Efficient formal safety analysis of neural networks. Proc 32nd Int Conf on Neural Information Processing Systems, p.6369–6379.

  • Wang SQ, Chen YZ, Abdou A, et al., 2018b. MixTrain: scalable training of formally robust neural networks. https://arxiv.org/abs/1811.02625

  • Weng TW, Zhang H, Chen PY, et al., 2018a. Evaluating the robustness of neural networks: an extreme value theory approach. Proc 6th Int Conf on Learning Representations.

  • Weng TW, Zhang H, Chen HG, et al., 2018b. Towards fast computation of certified robustness for ReLU networks. Proc 35th Int Conf on Machine Learning, p.5273–5282.

  • Wong E, Schmidt FR, Metzen JH, et al., 2018. Scaling provable adversarial defenses. Proc 32nd Int Conf on Neural Information Processing Systems, p.8410–8419.

  • Xiao KY, Tjeng V, Shafiullah NM, et al., 2019. Training for faster adversarial robustness verification via inducing ReLU stability. Proc 7th Int Conf on Learning Representations.

  • Zhang H, Weng TW, Chen PY, et al., 2018. Efficient neural network robustness certification with general activation functions. Proc 32nd Int Conf on Neural Information Processing Systems, p.4944–4953.

  • Zhang H, Chen HG, Xiao CW, et al., 2020. Towards stable and efficient training of verifiably robust neural networks. Proc 8th Int Conf on Learning Representations.

  • Zhang YD, Zhao Z, Chen GK, et al., 2022. QVIP: an ILP-based formal verification approach for quantized neural networks. Proc 37th IEEE/ACM Int Conf on Automated Software Engineering, p.82:1–82:13. https://doi.org/10.1145/3551349.3556916

  • Zhao Z, Zhang YD, Chen GK, et al., 2022. CLEVEREST: accelerating CEGAR-based neural network verification via adversarial attacks. Proc 29th Int Static Analysis Symp, p.449–473. https://doi.org/10.1007/978-3-031-22308-2_20


Author information

Contributions

Zhen LIANG designed the research. Taoran WU, Wanwei LIU, Bai XUE, Wenjing YANG, Ji WANG, and Zhengbin PANG improved the research design. Zhen LIANG and Taoran WU implemented the experiments. Zhen LIANG drafted the paper. Wanwei LIU, Ji WANG, and Taoran WU helped organize the paper. Zhen LIANG revised and finalized the paper.

Corresponding author

Correspondence to Wanwei Liu  (刘万伟).

Ethics declarations

Ji WANG is an editorial board member of Frontiers of Information Technology & Electronic Engineering, and he was not involved with the peer review process of this paper. Zhen LIANG, Taoran WU, Wanwei LIU, Bai XUE, Wenjing YANG, Ji WANG, and Zhengbin PANG declare that they have no conflict of interest.

Additional information

Project supported by the National Key R&D Program of China (No. 2022YFA1005101), the National Natural Science Foundation of China (Nos. 61872371, 62032024, and U19A2062), and the CAS Pioneer Hundred Talents Program, China


About this article


Cite this article

Liang, Z., Wu, T., Liu, W. et al. Towards robust neural networks via a global and monotonically decreasing robustness training strategy. Front Inform Technol Electron Eng 24, 1375–1389 (2023). https://doi.org/10.1631/FITEE.2300059

