
Protecting IP of deep neural networks with watermarking using logistic disorder generation trigger sets

Published in: Multimedia Tools and Applications

Abstract

As deep learning technology matures, it is being widely deployed in fields such as image classification and speech recognition. However, training a functional deep learning model requires vast computing power and a large training dataset, which has given rise to a new business model of selling pre-trained models. These models are highly susceptible to theft, threatening the interests of their creators, and both the network topology and the weight parameters constitute intellectual property. Addressing these challenges requires a method that can tag a trained model to claim ownership without degrading its performance. We therefore propose a novel neural network watermarking protocol. Unlike previous methods, the trigger set is constructed by using a key obtained from an authority to generate a scrambling sequence, scrambling the image pixels with that sequence, and assigning the scrambled images their original labels. The trigger set is then trained together with the original training set to embed the watermark. Because the Logistic chaotic map is nonlinear, unpredictable, and sensitive to initial values, we use it to generate the scrambling sequence. A third-party copyright center participates in the embedding process to prevent forgery attacks; it needs to store only the scrambling key and a timestamp for each owner, keeping its storage burden low. Our experimental results demonstrate that the ResNet model exhibits a mere 0.05 percentage point decrease in accuracy when the watermark is embedded by fine-tuning, and a mere 0.03 percentage point decrease when training from scratch. For the SENet model, embedding the watermark by fine-tuning reduced classification accuracy by 1.35 percentage points, while training from scratch increased it by 0.94 percentage points. Furthermore, the watermarked model proved robust against various attacks in our robustness experiments, including model fine-tuning, model compression, and watermark overwriting.
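The trigger-set construction described in the abstract can be sketched as follows. The logistic map x_{k+1} = r·x_k·(1 − x_k) is chaotic for r close to 4, so a secret initial value x_0 acts as the scrambling key. The specific parameter r = 3.99, the function names, and the use of `argsort` to turn the chaotic sequence into a pixel permutation are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    seq = np.empty(n)
    x = x0
    for k in range(n):
        x = r * x * (1 - x)
        seq[k] = x
    return seq

def scramble_image(image, key_x0, r=3.99):
    """Scramble pixel positions using a key-driven chaotic permutation.

    The chaotic sequence is sorted, and the resulting index order
    (argsort) defines a permutation of the pixel positions.
    """
    if image.ndim == 3:
        flat = image.reshape(-1, image.shape[-1])  # one row per pixel
    else:
        flat = image.ravel()
    perm = np.argsort(logistic_sequence(key_x0, r, len(flat)))
    return flat[perm].reshape(image.shape)

# Build one trigger sample: the scrambled image keeps its ORIGINAL label,
# so only a network trained on the trigger set classifies it correctly.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
trigger_img = scramble_image(img, key_x0=0.654321)
```

Because the map is sensitive to initial values, even a slightly different key produces an entirely different permutation, so an adversary without the key from the copyright center cannot reproduce the trigger set.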



Acknowledgements

It is an honor to be part of Dr. Shen’s team, and I would also like to thank my partner for her great support of my work. This work was partially supported by Basic Research projects (Grant Nos. 2020B1515120089, 2021A1515011171, 202102080410, and 202102080282).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Shuyuan Shen.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Lin, H., Shen, S. & Lyu, H. Protecting IP of deep neural networks with watermarking using logistic disorder generation trigger sets. Multimed Tools Appl 83, 10735–10754 (2024). https://doi.org/10.1007/s11042-023-15980-z

