Examining StyleGAN as a Utility-Preserving Face De-identification Method

Authors: Seyyed Mohammad Sadegh Moosavi Khorzooghi (The University of Texas at Arlington), Shirin Nilizadeh (The University of Texas at Arlington)

Volume: 2023
Issue: 4
Pages: 341–358
DOI: https://doi.org/10.56553/popets-2023-0114


Abstract: Several face de-identification methods have been proposed to preserve users’ privacy by obscuring their faces. These methods, however, can degrade the quality of photos, and they usually do not preserve the utility of faces, i.e., their age, gender, pose, and facial expression. Recently, advanced generative adversarial network models, such as StyleGAN [33], have been proposed, which generate realistic, high-quality imaginary faces. In this paper, we investigate the use of StyleGAN for generating de-identified faces through style mixing, where the styles or features of the target face and an auxiliary face are mixed to generate a de-identified face that carries the utility of the target face. We examined this de-identification method for preserving utility and privacy by implementing several face detection, verification, and identification attacks and by conducting a user study. The results of our extensive experiments, human evaluation, and comparison with two state-of-the-art face de-identification methods, i.e., CIAGAN and DeepPrivacy, show that StyleGAN performs on par with or better than these methods, preserving users’ privacy and images’ utility. In particular, the results of the machine learning-based experiments show that StyleGAN 0-4 preserves utility better than CIAGAN and DeepPrivacy while preserving privacy at the same level, and StyleGAN 0-3 preserves utility at the same level while providing more privacy. In this paper, for the first time, we also performed a carefully designed user study to examine the privacy- and utility-preserving properties of StyleGAN 0-3, 0-4, and 0-5, as well as CIAGAN and DeepPrivacy, from human observers’ perspectives. Our statistical tests showed that participants verify and identify StyleGAN 0-5 images more easily than DeepPrivacy images. All methods except StyleGAN 0-5 had significantly lower identification rates than CIAGAN. Regarding utility, as expected, StyleGAN 0-5 performed significantly better at preserving some attributes. Among all methods, on average, participants believed gender was preserved the most while naturalness was preserved the least.
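The "StyleGAN 0-3/0-4/0-5" notation in the abstract refers to which of the generator's per-layer style codes are exchanged during style mixing. The following is a minimal, hypothetical sketch of that mixing mechanism, assuming a pretrained StyleGAN2-style generator whose mapping network produces a per-layer latent code w; the names `G`, `encode_to_w`, the layer count, and the cutoff are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of StyleGAN-style style mixing for face de-identification.
# Assumes a pretrained generator `G` (e.g., StyleGAN2) whose mapping network
# maps a latent z to a per-layer code w of shape [num_layers, w_dim] and whose
# synthesis network consumes such per-layer codes. All helper names here
# (`G`, `encode_to_w`, NUM_LAYERS, CUTOFF) are illustrative assumptions.

import torch

NUM_LAYERS = 18   # e.g., 18 style layers for a 1024x1024 StyleGAN2 generator
CUTOFF = 5        # "StyleGAN 0-4": style layers 0..4 are taken from one face

def style_mix(w_target: torch.Tensor,
              w_auxiliary: torch.Tensor,
              cutoff: int = CUTOFF) -> torch.Tensor:
    """Mix per-layer style codes of a target face and an auxiliary face.

    Layers [0, cutoff) keep one face's coarse styles while the remaining
    layers take the other face's styles; which face contributes which range
    follows the configuration under study (0-3, 0-4, or 0-5).
    """
    w_mixed = w_target.clone()
    w_mixed[cutoff:] = w_auxiliary[cutoff:]
    return w_mixed

# Hypothetical usage: obtain w codes for both faces, mix, then synthesize.
# w_target = encode_to_w(G, target_image)              # GAN-inversion encoder
# w_aux    = G.mapping(torch.randn(1, 512), None)[0]   # random auxiliary face
# image    = G.synthesis(style_mix(w_target, w_aux).unsqueeze(0))
```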

Keywords: face de-identification, face obfuscation, privacy, utility, StyleGAN

Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.