
Secure Split Learning Against Property Inference, Data Reconstruction, and Feature Space Hijacking Attacks

  • Conference paper in: Computer Security – ESORICS 2023 (ESORICS 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14347)

Abstract

Split learning of deep neural networks (SplitNN) offers a promising solution for joint learning in the mutual interest of a guest and a host, who may come from different backgrounds and hold vertically partitioned features. However, SplitNN creates a new attack surface for an adversarial participant. By investigating the adversarial effects of highly threatening attacks, including property inference, data reconstruction, and feature hijacking attacks, we identify the underlying vulnerability of SplitNN. To protect SplitNN, we design a privacy-preserving tunnel for information exchange. The intuition is to perturb the propagation of knowledge in each direction with a controllable unified solution. To this end, we propose a new activation function named R3eLU, which transfers private smashed data and partial loss into randomized responses. We make the first attempt to secure split learning against all three attacks and present a fine-grained privacy budget allocation scheme. Our analysis proves that our privacy-preserving SplitNN solution provides a tight privacy budget, while experimental results show that our solution outperforms existing solutions in most cases and achieves a good tradeoff between defense and model usability.
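This page does not reproduce R3eLU's exact definition (the arXiv version referenced in the Notes has the details), so the sketch below is only a hypothetical illustration of the stated idea: a ReLU whose outputs become randomized responses [38], with Laplace noise calibrated to a privacy budget \(\epsilon\). The function name aside, keep_prob, sensitivity, and the exact noise placement are assumptions rather than the authors' formulation.

```python
import numpy as np

def r3elu_sketch(x, epsilon, sensitivity=1.0, keep_prob=0.9):
    """Hypothetical R3eLU-style activation: ReLU followed by a randomized
    response that either releases a Laplace-noised activation or suppresses
    it to zero. Illustrative only; the paper's definition may differ."""
    out = np.maximum(np.asarray(x, dtype=float), 0.0)  # standard ReLU
    keep = np.random.rand(*out.shape) < keep_prob      # randomized response: release or drop
    noise = np.random.laplace(0.0, sensitivity / epsilon, size=out.shape)
    return np.where(keep, out + noise, 0.0)

# Smashed data leaving the guest would pass through such a perturbed activation
# instead of a plain ReLU before being sent to the host; the host's partial loss
# would be randomized analogously on the way back.
smashed = r3elu_sketch(np.random.randn(32, 64), epsilon=1.0)
```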


Notes

  1. More details can be found in another version: http://arxiv.org/abs/2304.09515.

References

  1. Abadi, M., et al.: Deep learning with differential privacy. In: ACM SIGSAC CCS (2016)

  2. Ceballos, I., et al.: SplitNN-driven vertical partitioning. arXiv preprint arXiv:2008.04137 (2020)

  3. Du, J., Li, S., Chen, X., Chen, S., Hong, M.: Dynamic differential-privacy preserving SGD. arXiv preprint arXiv:2111.00173 (2021)

  4. Dwork, C., Roth, A., et al.: The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 9(3–4), 211–407 (2014)

  5. Erdogan, E., Kupcu, A., Cicek, A.E.: UnSplit: data-oblivious model inversion, model stealing, and label inference attacks against split learning. In: Proceedings of the 21st Workshop on Privacy in the Electronic Society, WPES 2022 (2022)

  6. Erdogan, E., Teksen, U., Celiktenyildiz, M.S., Kupcu, A., Cicek, A.E.: Defense mechanisms against training-hijacking attacks in split learning. arXiv preprint arXiv:2302.0861 (2023)

  7. Fang, M., Gong, N.Z., Liu, J.: Influence function based data poisoning attacks to top-N recommender systems. In: WWW 2020 (2020)

  8. Fu, C., et al.: Label inference attacks against vertical federated learning. In: USENIX Security 2022 (2022)

  9. Ganju, K., Wang, Q., Yang, W., Gunter, C.A., Borisov, N.: Property inference attacks on fully connected neural networks using permutation invariant representations. In: ACM SIGSAC CCS (2018)

  10. Gao, H., Cai, L., Ji, S.: Adaptive convolutional ReLUs. In: AAAI Conference on Artificial Intelligence (2020)

  11. Gao, Y., et al.: End-to-end evaluation of federated learning and split learning for Internet of Things. In: SRDS (2020)

  12. Gawron, G., Stubbings, P.: Feature space hijacking attacks against differentially private split learning. arXiv preprint arXiv:2201.04018 (2022)

  13. Goodfellow, I., et al.: Generative adversarial nets. In: NIPS (2014)

  14. Gupta, O., Raskar, R.: Distributed learning of deep neural network over multiple agents. J. Netw. Comput. Appl. 116, 1–8 (2018)

  15. Harper, F.M., Konstan, J.A.: The MovieLens datasets: history and context. ACM Trans. Interact. Intell. Syst. (2015)

  16. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. CoRR (2015)

  17. Hitaj, B., Ateniese, G., Perez-Cruz, F.: Deep models under the GAN: information leakage from collaborative deep learning. In: ACM SIGSAC CCS (2017)

  18. Huang, H., Mu, J., Gong, N.Z., Li, Q., Liu, B., Xu, M.: Data poisoning attacks to deep learning based recommender systems. In: NDSS (2021)

  19. Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images (2009)

  20. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)

  21. Li, J., Rakin, A.S., Chen, X., He, Z., Fan, D., Chakrabarti, C.: ResSFL: a resistance transfer framework for defending model inversion attack in split federated learning. In: CVPR (2022)

  22. Liu, R., Cao, Y., Chen, H., Guo, R., Yoshikawa, M.: FLAME: differentially private federated learning in the shuffle model. In: Proceedings of the AAAI Conference on Artificial Intelligence (2021)

  23. Luo, X., Wu, Y., Xiao, X., Ooi, B.C.: Feature inference attack on model predictions in vertical federated learning. In: ICDE (2021)

  24. Mao, Y., Yuan, X., Zhao, X., Zhong, S.: Romoa: robust model aggregation for the resistance of federated learning to model poisoning attacks. In: ESORICS (2021)

  25. Mao, Y., Zhu, B., Hong, W., Zhu, Z., Zhang, Y., Zhong, S.: Private deep neural network models publishing for machine learning as a service. In: IWQoS (2020)

  26. McMahan, B., Moore, E., Ramage, D., Hampson, S., Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. In: Artificial Intelligence and Statistics (2017)

  27. Melis, L., Song, C., De Cristofaro, E., Shmatikov, V.: Exploiting unintended feature leakage in collaborative learning. In: IEEE S&P (2019)

  28. Molchanov, P., Mallya, A., Tyree, S., Frosio, I., Kautz, J.: Importance estimation for neural network pruning. In: CVPR (2019)

  29. Nasr, M., Shokri, R., Houmansadr, A.: Comprehensive privacy analysis of deep learning: passive and active white-box inference attacks against centralized and federated learning. In: IEEE S&P (2019)

  30. Nguyen, T.D., et al.: FLGUARD: secure and private federated learning. arXiv preprint arXiv:2101.02281 (2021)

  31. OpenMined: PySyft (2021). https://github.com/OpenMined/PySyft

  32. Pasquini, D., Ateniese, G., Bernaschi, M.: Unleashing the tiger: inference attacks on split learning. In: ACM SIGSAC CCS (2021)

  33. Pereteanu, G.L., Alansary, A., Passerat-Palmbach, J.: Split HE: fast secure inference combining split learning and homomorphic encryption. arXiv preprint arXiv:2202.13351 (2022)

  34. Salem, A., Bhattacharya, A., Backes, M., Fritz, M., Zhang, Y.: Updates-Leak: data set inference and reconstruction attacks in online learning. In: USENIX Security Symposium (2020)

  35. Salem, A., Zhang, Y., Humbert, M., Fritz, M., Backes, M.: ML-Leaks: model and data independent membership inference attacks and defenses on machine learning models. In: NDSS (2019)

  36. Sun, L., Qian, J., Chen, X.: LDP-FL: practical private aggregation in federated learning with local differential privacy. In: IJCAI (2021)

  37. Tolpegin, V., Truex, S., Gursoy, M.E., Liu, L.: Data poisoning attacks against federated learning systems. In: ESORICS (2020)

  38. Warner, S.L.: Randomized response: a survey technique for eliminating evasive answer bias. J. Am. Stat. Assoc. 60(309), 63–69 (1965)

  39. WeBank: FATE (2021). https://github.com/FederatedAI/FATE

  40. Yu, L., Liu, L., Pu, C., Gursoy, M.E., Truex, S.: Differentially private model publishing for deep learning. In: IEEE S&P (2019)

  41. Zhang, C., Li, S., Xia, J., Wang, W., Yan, F., Liu, Y.: BatchCrypt: efficient homomorphic encryption for cross-silo federated learning. In: 2020 USENIX Annual Technical Conference (USENIX ATC 2020) (2020)

  42. Zheng, Y., Lai, S., Liu, Y., Yuan, X., Yi, X., Wang, C.: Aggregation service for federated learning: an efficient, secure, and more resilient realization. IEEE Trans. Dependable Secure Comput. 20(2), 988–1001 (2022)

  43. Ziegler, C.N., McNee, S.M., Konstan, J.A., Lausen, G.: Improving recommendation lists through topic diversification. In: WWW (2005)


Acknowledgement

The authors would like to thank our shepherd, Prof. Stjepan Picek, and the anonymous reviewers for the time and effort they kindly put into this paper; their suggestions have improved this work. This work was supported in part by the Leading-edge Technology Program of Jiangsu-NSF under Grant BK20222001 and the National Natural Science Foundation of China under Grants NSFC-62272222, NSFC-61902176, and NSFC-62272215.

Author information


Corresponding author

Correspondence to Sheng Zhong.


A Appendix

A.1 Model Architecture

The neural networks used for the MovieLens, BookCrossing, MNIST, and CIFAR100 datasets after the split are shown in Table 8. These networks are widely used in related studies. We apply ResNet18 [16] for CIFAR100 and split the models according to the interpretation of SplitNN in previous studies [2, 32]; a minimal split sketch follows Table 8.

Table 8. Model architectures used for evaluation.
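For concreteness, the following PyTorch sketch shows one way to cut the torchvision ResNet18 into a guest-side bottom model and a host-side top model. The cut point chosen here (after layer1) is an assumption made for illustration; the actual split follows Table 8 and the SplitNN interpretation of [2, 32].

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

full = resnet18(num_classes=100)  # CIFAR100 setting

# Guest-side bottom model: early layers up to an assumed cut point.
guest_bottom = nn.Sequential(full.conv1, full.bn1, full.relu, full.maxpool, full.layer1)

# Host-side top model: remaining residual blocks plus the classification head.
host_top = nn.Sequential(full.layer2, full.layer3, full.layer4,
                         full.avgpool, nn.Flatten(), full.fc)

x = torch.randn(8, 3, 32, 32)  # raw inputs never leave the guest
smashed = guest_bottom(x)      # smashed data crossing the cut layer
logits = host_top(smashed)     # host completes the forward pass
```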

A.2 Supplementary Evaluation Results

To further investigate how our solution affects the learning process of SplitNN, Figs. 4 and 5 report the learning results of a MovieLens recommendation model when protecting the privacy of the guest and of the host, respectively. Each plot shows the trends of training and testing accuracy as the training epoch increases. When \(\epsilon =0.1\) for either the guest or the host, model usability is seriously degraded. Usability improves once the privacy budget increases to 1 for either party. We conclude from the figures that our solution achieves satisfactory model usability even with a small privacy budget on either side of SplitNN.

Fig. 4. SplitNN learning curve with the guest's privacy protected by our solution.

Fig. 5. SplitNN learning curve with the host's privacy protected by our solution.

Table 9. Top-10 hit ratio (%) of SplitNN using different cut layers.

In Table 9, we benchmark SplitNN with different cut layers on two public datasets, MovieLens [15] and BookCrossing [43], reporting the top-10 hit ratio on the test set. We use min as the merging strategy and treat one linear layer followed by one ReLU as one cut layer. The differences between cut layers are small; however, considering the computational cost on the guest side, selecting the first layer as the cut layer offers a good tradeoff between computational cost and model usability.
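As a minimal sketch of this setup (feature dimensions and the head are assumed for illustration), each party pushes its vertical feature partition through one Linear + ReLU cut layer, and the host fuses the two smashed tensors element-wise with min before its top model:

```python
import torch
import torch.nn as nn

guest_cut = nn.Sequential(nn.Linear(20, 64), nn.ReLU())    # guest's cut layer (dims assumed)
host_cut = nn.Sequential(nn.Linear(30, 64), nn.ReLU())     # host's cut layer (dims assumed)
top_model = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())  # host-side head (assumed)

x_guest, x_host = torch.randn(16, 20), torch.randn(16, 30)
merged = torch.minimum(guest_cut(x_guest), host_cut(x_host))  # "min" merging strategy
score = top_model(merged)  # e.g. per-item score used for the top-10 hit ratio
```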


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Mao, Y., Xin, Z., Li, Z., Hong, J., Yang, Q., Zhong, S. (2024). Secure Split Learning Against Property Inference, Data Reconstruction, and Feature Space Hijacking Attacks. In: Tsudik, G., Conti, M., Liang, K., Smaragdakis, G. (eds) Computer Security – ESORICS 2023. ESORICS 2023. Lecture Notes in Computer Science, vol 14347. Springer, Cham. https://doi.org/10.1007/978-3-031-51482-1_2

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-51482-1_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-51481-4

  • Online ISBN: 978-3-031-51482-1

  • eBook Packages: Computer Science, Computer Science (R0)
