DOI: 10.1145/3583780.3614992

research-article
Open Access

Non-Uniform Adversarial Perturbations for Discrete Tabular Datasets

Published: 21 October 2023

ABSTRACT

We study the problem of adversarial attack and robustness on tabular datasets with discrete features. The discrete features of a tabular dataset represent high-level meaningful concepts drawn from different vocabularies, which demands non-uniform robustness. Further, the notion of distance between tabular input instances is not well defined, making the problem of producing adversarial examples with minor perturbations qualitatively more challenging than in existing settings. To address this, our paper defines the notion of distance through the lens of feature embeddings learnt to represent the discrete features. We then formulate the task of generating adversarial examples as a binary set selection problem under non-uniform feature importance. Next, we propose an efficient approximate gradient-descent-based algorithm, called the Discrete Non-uniform Approximation (DNA) attack, which reformulates the problem into a continuous domain in order to solve the original optimization problem for generating adversarial examples. We demonstrate the effectiveness of our proposed DNA attack using two large real-world discrete tabular datasets from e-commerce domains for binary classification, where the datasets are heavily biased toward one class. We also analyze challenges for existing adversarial training frameworks on such datasets under our DNA attack.
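The core recipe the abstract describes (measure distance in a learnt embedding space, relax the discrete token choice into a continuous one, then run gradient descent) can be sketched roughly as follows. This is an illustrative toy, not the paper's actual DNA algorithm: the single categorical feature, the linear classifier `w`, the embedding table `E`, and the penalty weight `lam` are all assumptions made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

V, d = 6, 4                     # vocabulary size, embedding dim (toy values)
E = rng.normal(size=(V, d))     # learnt embedding table for one discrete feature
w = rng.normal(size=d)          # weights of a toy linear classifier
x0 = 2                          # original token id for this feature
lam, lr, steps = 0.1, 0.1, 200  # distance-penalty weight, step size, iterations

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Continuous relaxation: instead of committing to one token, optimise logits
# over the whole vocabulary. The mixed embedding p @ E is differentiable, so
# we can do gradient descent on
#   L = -log f(emb) + lam * ||emb - emb_orig||^2,
# where the second term is the embedding-space distance penalty.
logits = np.zeros(V)
losses = []
for _ in range(steps):
    p = softmax(logits)
    emb = p @ E
    pred = sigmoid(w @ emb)                        # model output in (0, 1)
    diff = emb - E[x0]
    losses.append(-np.log(pred) + lam * diff @ diff)
    # Analytic gradients: d(-log sigmoid(s))/ds = -(1 - pred)
    g_emb = -(1.0 - pred) * w + 2.0 * lam * diff   # dL / d emb
    J = np.diag(p) - np.outer(p, p)                # softmax Jacobian
    logits -= lr * (J @ (E @ g_emb))               # chain rule through p @ E

x_adv = int(np.argmax(softmax(logits)))            # discretise back to a token
```

After optimisation, the relaxed distribution is snapped back to a single discrete token with `argmax`; in the non-uniform setting, per-feature importance weights would scale each feature's distance penalty rather than the single global `lam` used here.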


Published in

CIKM '23: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management
October 2023, 5508 pages
ISBN: 9798400701245
DOI: 10.1145/3583780

      Copyright © 2023 Owner/Author

      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Acceptance Rates

Overall Acceptance Rate: 1,861 of 8,427 submissions, 22%
