
A Member Inference Attack Defense Method Based on Differential Privacy and Data Enhancement

Conference paper

Applied Intelligence (ICAI 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 2015)

Abstract

The development of deep learning has given rise to the Machine Learning as a Service (MLaaS) business model. Through membership inference attacks (MIAs), malicious users can infer whether a particular sample participated in model training, thereby stealing user privacy. Although various defenses against membership inference attacks have been proposed, each targets a single class of attack and cannot defend against several classes at once. This paper proposes MEWDP, a method that defends against multiple types of membership inference attacks. First, it processes the private data with multi-round mixup data augmentation, adding non-interfering noise to the data in the form of data fusion. Then, during model training, Gaussian noise satisfying differential privacy is added to protect model privacy, and label smoothing is applied to keep the trained model from overfitting. The results show that this defense reduces the success rate of metric-based membership inference attacks to 51.2% and that of model-based membership inference attacks to 50.9%. Compared with other defense methods, MEWDP is more broadly applicable and achieves a stronger defense; on the CIFAR10 dataset, it reduces the success rate of membership inference attacks to 50.8%.
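To make these steps concrete, the sketch below illustrates the three general techniques the abstract names (mixup data fusion, label smoothing, and differentially private Gaussian gradient noise) in PyTorch. This is a minimal illustration of the standard techniques, not the paper's MEWDP implementation: all hyperparameters (mixup alpha, smoothing factor, clipping bound, noise scale) are assumed values, and for brevity the gradient clipping is applied per batch, whereas a true DP-SGD guarantee requires per-example clipping and a privacy accountant to track the (ε, δ) budget.

```python
# Minimal sketch (assumed names and hyperparameters, not the paper's code):
# mixup fusion of inputs and labels, label smoothing on the soft targets,
# and a Gaussian-noise gradient step in the style of DP-SGD.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mixup(x, y, num_classes, alpha=1.0):
    """One mixup round: fuse each example with a randomly paired one."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    y_hot = F.one_hot(y, num_classes).float()
    return lam * x + (1 - lam) * x[idx], lam * y_hot + (1 - lam) * y_hot[idx]

def smooth(y_soft, eps=0.1):
    """Label smoothing: pull soft targets toward the uniform distribution."""
    return (1 - eps) * y_soft + eps / y_soft.size(1)

def dp_noisy_step(model, loss, clip=1.0, sigma=0.5, lr=0.05):
    """Clip the gradient norm, add Gaussian noise, then take an SGD step.
    (Batch-level clipping simplifies DP-SGD's per-example clipping.)"""
    model.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad += sigma * clip * torch.randn_like(p.grad)
                p -= lr * p.grad

# Toy usage: a few noisy training steps on random CIFAR10-shaped data.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.randn(64, 3, 32, 32)
y = torch.randint(0, 10, (64,))
for _ in range(3):  # multiple mixup rounds over the same batch
    x_mix, y_mix = mixup(x, y, num_classes=10)
    log_probs = F.log_softmax(model(x_mix), dim=1)
    loss = -(smooth(y_mix) * log_probs).sum(dim=1).mean()
    dp_noisy_step(model, loss)
```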



Author information

Corresponding author

Correspondence to Lina Ge.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Cui, G., Ge, L., Zhao, Y., Fang, T. (2024). A Member Inference Attack Defense Method Based on Differential Privacy and Data Enhancement. In: Huang, DS., Premaratne, P., Yuan, C. (eds) Applied Intelligence. ICAI 2023. Communications in Computer and Information Science, vol 2015. Springer, Singapore. https://doi.org/10.1007/978-981-97-0827-7_23


  • DOI: https://doi.org/10.1007/978-981-97-0827-7_23

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0826-0

  • Online ISBN: 978-981-97-0827-7

  • eBook Packages: Computer Science, Computer Science (R0)
