
Securing recommender system via cooperative training


Abstract

Recommender systems are often susceptible to well-crafted fake profiles, leading to biased recommendations. Among existing defense methods, data-processing-based methods inevitably exclude normal samples, while model-based methods struggle to enjoy both generalization and robustness. To this end, we integrate data processing with robust model design and propose a general framework, Triple Cooperative Defense (TCD), which employs three cooperative models that mutually enhance the training data and thereby improve recommendation robustness. Furthermore, considering that existing attacks struggle to balance bi-level optimization and efficiency, we revisit poisoning attacks in recommender systems and introduce an efficient attack strategy, Co-training Attack (CoAttack), which cooperatively optimizes the attack objective and model training, respecting the bi-level setting while maintaining attack efficiency. Moreover, we reveal that a potential reason for the insufficient threat of existing attacks is their default assumption of optimizing the attack against an undefended model; this overly optimistic setting limits an attack's potential. Consequently, we put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process and thoroughly explores CoAttack's poisoning potential through the cooperative training of attack and defense. Extensive experiments on three real datasets demonstrate TCD's superiority in enhancing model robustness. Additionally, we verify that the two proposed attack strategies significantly outperform existing attacks, with the game-based GCoAttack posing a greater poisoning threat than CoAttack.
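To make the abstract's co-training idea more concrete, the following is a minimal sketch of the intuition behind TCD, not the paper's exact algorithm: three recommenders are trained in rounds, and in each round the pseudo-ratings on which two models closely agree are added to the third model's training data, so that consensus labels progressively dilute the influence of injected fake profiles. The matrix-factorization model, the agreement threshold, and all names below are illustrative assumptions.

```python
# Minimal sketch of TCD-style cooperative training (illustrative assumptions only).
import numpy as np

class MF:
    """Plain matrix factorization trained with SGD on observed (user, item, rating) triples."""
    def __init__(self, n_users, n_items, k=16, lr=0.01, reg=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.P = 0.1 * rng.standard_normal((n_users, k))
        self.Q = 0.1 * rng.standard_normal((n_items, k))
        self.lr, self.reg = lr, reg

    def fit(self, triples, epochs=5):
        for _ in range(epochs):
            for u, i, r in triples:
                p, q = self.P[u].copy(), self.Q[i]
                e = r - p @ q
                self.P[u] += self.lr * (e * q - self.reg * p)
                self.Q[i] += self.lr * (e * p - self.reg * q)

    def predict(self, u, i):
        return self.P[u] @ self.Q[i]

def tcd_round(models, observed, candidates, agree_tol=0.2):
    """One cooperative round: augment each model with pseudo-ratings
    on which the other two models closely agree."""
    for t, target in enumerate(models):
        peers = [m for j, m in enumerate(models) if j != t]
        pseudo = []
        for u, i in candidates:
            a, b = peers[0].predict(u, i), peers[1].predict(u, i)
            if abs(a - b) < agree_tol:             # high-confidence consensus
                pseudo.append((u, i, (a + b) / 2))
        target.fit(observed + pseudo, epochs=1)    # train on real + consensus data

# Toy usage: 50 users, 30 items, random observed ratings and candidate entries.
rng = np.random.default_rng(1)
observed = [(int(u), int(i), float(r)) for u, i, r in
            zip(rng.integers(0, 50, 200), rng.integers(0, 30, 200), rng.uniform(1, 5, 200))]
candidates = list({(int(u), int(i)) for u, i in
                   zip(rng.integers(0, 50, 300), rng.integers(0, 30, 300))})
models = [MF(50, 30, seed=s) for s in range(3)]
for m in models:
    m.fit(observed, epochs=3)                      # warm-up on observed data
for _ in range(3):
    tcd_round(models, observed, candidates)        # cooperative augmentation rounds
```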


Availability of data and material

The source code and data are available at https://github.com/greensun0830/Cotraining-Attack.

Notes

  1. https://www.librec.net/datasets/filmtrust.zip

  2. https://grouplens.org/datasets/movielens

  3. https://grouplens.org/datasets/movielens


Acknowledgements

The work was supported by grants from the National Natural Science Foundation of China (No. 62022077).


Author information


Contributions

Qingyang Wang and Chenwang Wu contributed equally to this paper, including the algorithm implementation, experimental data collation, and paper writing. Defu Lian and Enhong Chen proofread the manuscript. In addition, all authors reviewed the manuscript.

Corresponding author

Correspondence to Defu Lian.

Ethics declarations

Ethical approval

Not applicable. This article does not involve any human or animal studies, so no ethical approval was required.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, Q., Wu, C., Lian, D. et al. Securing recommender system via cooperative training. World Wide Web 26, 3915–3943 (2023). https://doi.org/10.1007/s11280-023-01214-7

