Abstract
Recent years have witnessed the tremendous success of diffusion models in data synthesis. However, when diffusion models are applied to sensitive data, they also give rise to severe privacy concerns. In this paper, we present a comprehensive study of membership inference attacks against diffusion models, which aim to infer whether a sample was used to train the model. We propose two attack methods, namely loss-based and likelihood-based attacks. We evaluate our attacks on several state-of-the-art diffusion models, over different datasets containing privacy-sensitive data. Extensive experimental evaluations reveal the relationship between membership leakage and the generative mechanisms of diffusion models. Furthermore, we exhaustively investigate various factors that can affect membership inference. Finally, we evaluate the membership risks of diffusion models trained with differential privacy.
Our code is available at: https://github.com/HailongHuPri/MIDM.
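The loss-based attack described in the abstract rests on a simple intuition: an overfitted diffusion model attains a lower denoising loss on its training samples than on unseen ones, so thresholding the per-sample loss separates members from non-members. The following is a minimal, self-contained sketch of that idea. It does not use the authors' code or models; the synthetic Gaussian losses, the `loss_attack` helper, and the midpoint threshold are all hypothetical stand-ins for per-sample losses that would, in practice, be computed from the target model's noise-prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample diffusion (denoising) losses. In a real attack these
# would be the target model's epsilon-prediction errors on each candidate
# sample; here, members get systematically lower loss to mimic overfitting.
member_losses = rng.normal(loc=0.8, scale=0.2, size=1000)
nonmember_losses = rng.normal(loc=1.2, scale=0.2, size=1000)

def loss_attack(losses, threshold):
    """Predict member (1) when the model's loss on a sample is below the threshold."""
    return (losses < threshold).astype(int)

# Threshold calibrated on auxiliary data; here simply the midpoint of the two means.
threshold = 1.0

preds = np.concatenate([
    loss_attack(member_losses, threshold),
    loss_attack(nonmember_losses, threshold),
])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])
accuracy = (preds == labels).mean()
print(f"attack accuracy: {accuracy:.2f}")
```

With this toy separation between member and non-member loss distributions, the threshold attack classifies well above chance; how large that gap is for a real diffusion model is precisely what the paper's experiments measure.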
Acknowledgments
This research was funded in whole by the Luxembourg National Research Fund (FNR), grant reference 13550291.
Appendix
This appendix presents additional results; each result is described in its caption.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Hu, H., Pang, J. (2023). Loss and Likelihood Based Membership Inference of Diffusion Models. In: Athanasopoulos, E., Mennink, B. (eds) Information Security. ISC 2023. Lecture Notes in Computer Science, vol 14411. Springer, Cham. https://doi.org/10.1007/978-3-031-49187-0_7
DOI: https://doi.org/10.1007/978-3-031-49187-0_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-49186-3
Online ISBN: 978-3-031-49187-0
eBook Packages: Computer Science (R0)