When Your AI Becomes a Target: AI Security Incidents and Best Practices

Authors

  • Kathrin Grosse, EPFL, Switzerland
  • Lukas Bieringer, QuantPi, Germany
  • Tarek R. Besold, TU Eindhoven, The Netherlands
  • Battista Biggio, University of Cagliari, Italy
  • Alexandre Alahi, EPFL, Switzerland

DOI

https://doi.org/10.1609/aaai.v38i21.30347

Keywords

Multidisciplinary Topics and Applications, Human-Computer Interaction, Machine Learning, Track: AI Incidents and Best Practices (paper)

Abstract

In contrast to vast academic efforts to study AI security, few real-world reports of AI security incidents exist. Even for incidents that are released, crucial information about the affected company and AI application is often missing, preventing a thorough investigation of the attackers' motives. As a consequence, it often remains unknown how such incidents could have been avoided. We tackle this gap and combine previous reports with freshly collected incidents into a small database of 32 AI security incidents. We analyze the attackers' targets and goals, influencing factors, causes, and mitigations. Many incidents stem from non-compliance with best practices in security and privacy-enhancing technologies. In the case of direct attacks on AI, access control may provide some mitigation, but there is little scientific work on best practices. Our paper is thus a call to action to address these gaps.

Published

2024-03-24

How to Cite

Grosse, K., Bieringer, L., Besold, T. R., Biggio, B., & Alahi, A. (2024). When Your AI Becomes a Target: AI Security Incidents and Best Practices. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23041-23046. https://doi.org/10.1609/aaai.v38i21.30347