Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding (Extended Abstract)

Zhilu Wang, Chao Huang, Qi Zhu

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023), Sister Conferences Best Papers track, pages 6498-6503. https://doi.org/10.24963/ijcai.2023/727

The robustness of deep neural networks in safety-critical systems, i.e., how sensitive the model output is to input perturbations, has received significant interest recently. While most previous works focus on the local robustness property, studies of the global robustness property, i.e., robustness over the entire input space, are still lacking. In this work, we formulate the global robustness certification problem for ReLU neural networks and present an efficient approach to address it. Our approach includes a novel interleaving twin-network encoding scheme and an over-approximation algorithm leveraging relaxation and refinement techniques. We evaluate its timing efficiency and effectiveness against other state-of-the-art global robustness certification methods and demonstrate it via case studies on practical applications.
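
For intuition on what is being certified: a common formalization, consistent with this abstract, calls a network f (epsilon, delta)-globally robust if |f(x + dx) - f(x)| <= delta for every input x and every perturbation with ||dx||_inf <= epsilon. The sketch below is a deliberately coarse, self-contained illustration of over-approximating that worst-case deviation by propagating interval bounds on the difference between two weight-sharing ("twin") copies of the network; it is our own illustrative code under these assumptions, not the paper's interleaving twin-network encoding, which is a much tighter formulation with relaxation and refinement. Names such as global_output_diff_bounds are ours.

    import numpy as np

    def relu_diff_interval(lo, hi):
        """Sound bounds on ReLU(a + d) - ReLU(a), given only d in [lo, hi].

        ReLU is 1-Lipschitz and monotone, so for any pre-activation a the
        difference ReLU(a + d) - ReLU(a) lies in [min(d, 0), max(d, 0)];
        taking the union over d in [lo, hi] yields [min(lo, 0), max(hi, 0)].
        """
        return np.minimum(lo, 0.0), np.maximum(hi, 0.0)

    def global_output_diff_bounds(weights, eps):
        """Over-approximate, over the ENTIRE input space, the output deviation
        of a fully connected ReLU network f: returns (d_lo, d_hi) such that
        f(x + dx) - f(x) is in [d_lo, d_hi] for all x and all ||dx||_inf <= eps.

        Conceptually this tracks the difference between two weight-sharing
        ("twin") copies of the network, one fed x and one fed x + dx; the
        biases cancel in the difference, so only the weight matrices matter.
        """
        n_in = weights[0].shape[1]
        lo, hi = -eps * np.ones(n_in), eps * np.ones(n_in)
        for i, W in enumerate(weights):
            # Interval matrix-vector product via positive/negative parts of W.
            Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
            lo, hi = Wp @ lo + Wn @ hi, Wp @ hi + Wn @ lo
            if i < len(weights) - 1:  # ReLU on every hidden layer
                lo, hi = relu_diff_interval(lo, hi)
        return lo, hi

    # Toy 2-3-1 network: the result bounds the global output change under any
    # input perturbation of infinity-norm at most 0.1, at any point x.
    W1 = np.array([[1.0, -0.5], [0.3, 0.8], [-0.2, 0.4]])
    W2 = np.array([[0.7, -1.1, 0.5]])
    print(global_output_diff_bounds([W1, W2], eps=0.1))

Such plain interval propagation is sound but loose, since it ignores how the two copies' activations correlate; the paper's contribution is precisely to encode both copies jointly so that this shared structure tightens the certified bound.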
Keywords (Sister Conferences Best Papers track): Uncertainty in AI; Multidisciplinary Topics and Applications; Machine Learning