Article

Privacy-Preserving Image Template Sharing Using Contrastive Learning

by Shideh Rezaeifar *, Slava Voloshynovskiy, Meisam Asgari Jirhandeh and Vitality Kinakh
Department of Computer Science, University of Geneva, 1227 Carouge, Switzerland
* Author to whom correspondence should be addressed.
Entropy 2022, 24(5), 643; https://doi.org/10.3390/e24050643
Submission received: 4 January 2022 / Revised: 8 April 2022 / Accepted: 20 April 2022 / Published: 3 May 2022
(This article belongs to the Special Issue Information-Theoretic Approach to Privacy and Security)

Abstract:
With the recent developments of Machine Learning as a Service (MLaaS), various privacy concerns have been raised. Having access to the user’s data, an adversary can design attacks with different objectives, namely, reconstruction or attribute inference attacks. In this paper, we propose two different training frameworks for an image classification task while preserving user data privacy against the two aforementioned attacks. In both frameworks, an encoder is trained with contrastive loss, providing a superior utility-privacy trade-off. In the reconstruction attack scenario, a supervised contrastive loss was employed to provide maximal discrimination for the targeted classification task. The encoded features are further perturbed using the obfuscator module to remove all redundant information. Moreover, the obfuscator module is jointly trained with a classifier to minimize the correlation between private feature representation and original data while retaining the model utility for the classification. For the attribute inference attack, we aim to provide a representation of data that is independent of the sensitive attribute. Therefore, the encoder is trained with supervised and private contrastive loss. Furthermore, an obfuscator module is trained in an adversarial manner to preserve the privacy of sensitive attributes while maintaining the classification performance on the target attribute. The reported results on the CelebA dataset validate the effectiveness of the proposed frameworks.

1. Introduction

Deep learning has been widely applied in many computer vision applications in recent years, with remarkable success. Much of this progress has been made possible by accessible computational power and the widely available datasets needed for training. The need for memory and computational power has incentivized many companies, such as Amazon, Google, and IBM, to provide their customers with platforms offering Machine Learning as a Service (MLaaS). MLaaS runs in a cloud environment and covers most infrastructure issues such as data pre-processing, model training, and model evaluation. Hence, users can deploy their machine learning models by simply uploading their data (e.g., images) to the cloud server.
With all the promises made by MLaaS, this scheme introduces various privacy challenges for both users and the service provider. On the one hand, service providers are concerned that an adversary could pose as a client to steal their model parameters. On the other hand, users worry that sensitive information might be revealed to unauthorized third parties when they upload their raw data to the cloud server [1]. Furthermore, in some financial or medical applications, users might not be legally allowed to upload and submit raw data to the cloud server. One widely used solution is to share a feature representation of the data instead. However, an adversary can still exploit the privacy leakage in the feature representation and design attacks targeting various objectives.
There are mainly two types of attacks on the privacy of users’ data: the attribute inference attack and the reconstruction attack [1,2]. In a reconstruction or model inversion attack, the adversary’s goal is to reconstruct the original data given the shared feature representation, whereas in an attribute inference attack, the adversary is interested in identifying certain sensitive attributes of the data, such as age, gender, or race.
In this paper, we consider an image classification task in which users send their original data to the cloud service provider. The adversary, a malicious user or the MLaaS provider itself, wishes to exploit the privacy leakage in the shared feature representation through a reconstruction or attribute inference attack.
The rest of the paper is organized as follows: Problem formulation and assumptions are introduced in Section 2. Section 3 reviews the related work. Two defense frameworks against the reconstruction attack and attribute inference attack are proposed in Section 4 and Section 5, respectively. Finally, Section 6 concludes this work along with suggestions for future work.

2. Problem Formulation

As shown in Figure 1, given the high-dimensional images $\mathbf{x} \in \mathbb{R}^n$ in the dataset, users or data owners intend to share a feature representation $\mathbf{h}$ for a specific utility task, image classification. Let $Y_t$ denote the labels of the target class that the central classifier is trained to predict, and let $Y_p$ denote the label information of the private and sensitive attribute. Concerned about the privacy leakage in the shared representations, the users, as defenders, apply an obfuscation mechanism to the shared features before releasing them to the public as $\mathbf{h}_p$. The defender’s ultimate goal is to maintain a good classification performance while preserving their privacy.
On the other hand, having access to a collection of original images and their corresponding protected features $\mathcal{D} = \{(\mathbf{x}_1, \mathbf{h}_{p,1}), (\mathbf{x}_2, \mathbf{h}_{p,2}), \ldots, (\mathbf{x}_N, \mathbf{h}_{p,N})\}$, the adversary aims to reconstruct the original data or recognize sensitive attributes such as age, gender, etc. Therefore, in this setting, the utility is the classification task, and privacy is measured by the attacker’s ability to reconstruct the original data or re-identify the sensitive attributes.

3. Related Work

Several techniques have been introduced to preserve the users’ data privacy, such as image obfuscation, homomorphic encryption, secure multi-party computation, and private feature representation.
Classical image obfuscation: In image obfuscation techniques, the original image is perturbed to hide sensitive information or details and make it visually unidentifiable. Conventional methods include pixelating [3], blurring [3,4], and masking [5]. However, as discussed in [6,7], these protected images can still be identified or reconstructed using deep learning-assisted methods. Recently, more advanced frameworks of deep obfuscation based on deep generative models have been introduced [8,9,10].
Homomorphic encryption: Homomorphic encryption (HE) is another method that allows one to carry out computations on encrypted data without the need for decryption [11]. This means that data can be processed securely even when they have been outsourced to untrusted and public environments. HE can be categorized into three types, namely partially homomorphic (PHE), somewhat homomorphic (SWHE), and fully homomorphic encryption (FHE) [11]. However, the operations in HE are limited to those that can be represented as polynomials of bounded degree; they therefore cannot be used with complicated, nonlinear computation functions. Moreover, HE is highly computationally intensive and leads to an extremely slow training process.
Deep and private feature sharing: With the recent advancements of deep models, a new line of work has been introduced to share deep private and obfuscated feature representations of images. Osia et al. [12] considered a client-server setting in which the deep model architecture is separated into two parts: a feature extractor on the client’s side and a classifier on the cloud. The extracted features are then protected against attribute inference attacks by adding noise and Siamese fine-tuning. However, their proposed framework is not feasible during training due to its interactive training procedure and high communication throughput between the clients and servers [13].
Later, Li et al. proposed PrivyNet, a private deep learning training framework [13]. PrivyNet splits a neural network into local and cloud counterparts. The feature representations of private data are extracted using the local model while the cloud neural network is trained on publicly released features for the target classification task. The authors considered a reconstruction attack on the shared features and measured privacy through the reconstruction error. In ref. [14], the authors used an adversarial training scheme between an encoder and a classifier to preserve the privacy of intermediate encoded features from attribute inference attacks.
Along the same line of research, Liu et al. [15] introduced an adversarial privacy network called PAN to learn obfuscated features. The learned obfuscated features are designed to be effective against both reconstruction attacks and attribute inference attacks. Similarly, DeepObfuscator was introduced in ref. [16], where the authors extended PAN to include perceptual quality.
In the context of the privacy of published datasets, Huang et al. [17] proposed a framework based on a min-max game between a privatizer and an adversary. By employing generative adversarial networks (GAN) in their framework, users can directly learn to privatize their dataset without having access to the dataset statistics.

4. Defense against a Reconstruction Attack

This section introduces a framework to maintain a good classification accuracy while avoiding the invertibility of shared representations. In other words, the proposed framework is designed to keep only relevant information for the specific classification task. The model consists of three modules: encoder, obfuscator, and classifier. The encoder is trained using supervised contrastive loss to provide maximal discrimination for the classification task. The encoded features are obfuscated by minimizing their statistical correlation to the original input images. Finally, a classifier is jointly trained to maintain the classification performance.

4.1. Proposed Architecture

The overall private data-sharing framework, shown in Figure 2, consists of three steps:
  • An encoder  f ϕ is pre-trained on the public data using supervised contrastive loss. The encoder is later used to extract discriminative representation for the targeted classification task;
  • An obfuscator  f ψ is learned to remove irrelevant information in representation h by minimizing its correlation to the original data x ;
  • A classifier  g θ is jointly trained with the obfuscator to ensure that the useful information for the intended classification task is preserved in the obfuscated representation.

4.1.1. Encoder

As shown in Figure 3, the encoder f ϕ is initially trained with a contrastive loss to output a well-discriminated feature representation. To this end, we used a ResNet backbone with contrastive loss similar to the SimCLR approach [18].
The basic idea behind contrastive learning is to pull similar instances, denoted as positive pairs, together and push dissimilar ones, the negative samples, apart. Given a random augmentation transform $T_t(\cdot)$, two different views $\mathbf{x}_i, \mathbf{x}_j$ of the same image $\mathbf{x}$ are considered as a positive pair, and the rest of the batch samples as negative pairs. A projection head $g_\theta(\cdot)$ maps the feature representations of the base encoder to the latent embedding $\mathbf{z}$ [18]:
$$\mathbf{x}_i = T_{t_i}(\mathbf{x}), \quad \mathbf{h}_i = f_\phi(\mathbf{x}_i), \quad \mathbf{z}_i = g_\theta(\mathbf{h}_i); \qquad \mathbf{x}_j = T_{t_j}(\mathbf{x}), \quad \mathbf{h}_j = f_\phi(\mathbf{x}_j), \quad \mathbf{z}_j = g_\theta(\mathbf{h}_j).$$
Using cosine similarity, the similarities between positive pairs are maximized while the negative ones are minimized. The self-supervised contrastive loss is defined as:
$$\mathcal{L}_{ssl} = -\sum_i \log \frac{\exp\big(\mathrm{sim}(\mathbf{z}_i, \mathbf{z}_j)\big)}{\sum_{k,\, k \neq i} \exp\big(\mathrm{sim}(\mathbf{z}_i, \mathbf{z}_k)\big)}.$$
This idea was further extended to include target class information in the loss where feature representations from the same class are pulled closer together than those from different classes [19].
$$\mathcal{L}_{supcon} = \sum_i \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp\big(\mathrm{sim}(\mathbf{z}_i, \mathbf{z}_p)\big)}{\sum_{k,\, k \neq i} \exp\big(\mathrm{sim}(\mathbf{z}_i, \mathbf{z}_k)\big)},$$
where $P(i)$ is the set of positive samples belonging to the same class as $\mathbf{x}_i$.
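To make the supervised contrastive loss concrete, the following is a minimal PyTorch-style sketch of a SupCon-type loss over a batch of projected embeddings; the function name, temperature parameter, and batching details are our own illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """Sketch of a SupCon-style loss: each anchor is pulled towards
    embeddings that share its target label.
    z: (B, d) projected embeddings; labels: (B,) target class labels."""
    z = F.normalize(z, dim=1)                      # cosine similarity via dot products
    sim = z @ z.t() / temperature                  # (B, B) pairwise similarities
    B = z.size(0)
    self_mask = torch.eye(B, dtype=torch.bool, device=z.device)

    # positive set P(i): samples with the same target label, excluding the anchor itself
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    # the denominator runs over all k != i
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # average the negative log-probability over each anchor's positive set P(i)
    per_anchor = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(1) / pos_mask.sum(1).clamp(min=1)
    return per_anchor.mean()
```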

4.1.2. Obfuscator

The obfuscator $f_\psi$ is trained to avoid the invertibility of the shared feature representation. From an information-theoretic point of view, $X \rightarrow H \rightarrow \hat{X}$ forms a Markov chain. To mitigate the reconstruction attack, $I(X; \hat{X})$ should be minimized. A widely used approach is to jointly train an adversary image decoder to achieve reconstruction disparity by minimizing the Structural Similarity Index Measure (SSIM) [20]. This is done through a min-max optimization game between the obfuscator and the adversary decoder.
Nevertheless, by the data processing inequality applied to the above Markov chain, minimizing the mutual information between the original image $X$ and the feature representation $H$ upper-bounds $I(X; \hat{X})$, since $I(X; H) \geq I(X; \hat{X})$.
To minimize I ( X , H ) , one should estimate the mutual information, which is a well-known and challenging problem and would involve a more complicated optimization. To solve this issue and to accelerate and simplify the training, we adopted two statistical correlation measures between random variables, namely, Hilbert–Schmidt Independence Criterion (HSIC) [21,22] and Distance Correlation (DistCorr) [23]. Consequently, the obfuscator network f ψ is trained to minimize the correlation between the original images and the protected representation:
$$\mathcal{L}_{\mathrm{Corr}} = \mathrm{Corr}(\mathbf{x}, \mathbf{h}_p),$$
where $\mathrm{Corr}(\cdot)$ is based on either the distance correlation (DistCorr) or the Hilbert–Schmidt Independence Criterion (HSIC). The idea of minimizing the statistical dependencies of features has appeared in the literature on federated or distributed learning and in physics [24,25,26].
Hilbert–Schmidt Independence Criterion (HSIC): Let $\mathcal{F}$ be a reproducing kernel Hilbert space (RKHS) with continuous feature mapping $\phi(\mathbf{x})$ and kernel function $k(\mathbf{x}, \mathbf{x}') = \langle \phi(\mathbf{x}), \phi(\mathbf{x}') \rangle$. Similarly, let $\mathcal{G}$ be an RKHS with continuous feature mapping $\psi(\mathbf{h})$ and kernel function $k(\mathbf{h}, \mathbf{h}') = \langle \psi(\mathbf{h}), \psi(\mathbf{h}') \rangle$.
The cross-covariance operator $C_{xh}: \mathcal{G} \rightarrow \mathcal{F}$ can be defined as [21,22]:
$$C_{xh} := \mathbb{E}_{p(\mathbf{x}, \mathbf{h})}\big[(\phi(\mathbf{x}) - \mu_x) \otimes (\psi(\mathbf{h}) - \mu_h)\big],$$
where $\otimes$ denotes the tensor (outer) product, $\mu_x = \mathbb{E}_{p(\mathbf{x})}[\phi(\mathbf{x})]$, and $\mu_h = \mathbb{E}_{p(\mathbf{h})}[\psi(\mathbf{h})]$. The largest singular value of the cross-covariance operator $C_{xh}$ is zero if and only if $\mathbf{x}$ and $\mathbf{h}$ are independent.
The Hilbert–Schmidt Independence Criterion is defined as the squared Hilbert–Schmidt norm of the associated cross-covariance operator $C_{xh}$:
$$\mathrm{HSIC}_{x,h}(\mathcal{F}, \mathcal{G}) = \| C_{xh} \|_{HS}^2.$$
Distance Correlation (DistCorr): Let $X$ and $H$ be two random vectors with finite second moments, and let $(X, H)$, $(X', H')$, and $(X'', H'')$ be independent and identically distributed copies. The distance covariance can then be defined as:
$$\mathrm{dCov}^2(X, H) = \mathbb{E}\big(|X - X'|\,|H - H'|\big) + \mathbb{E}\big(|X - X'|\big)\,\mathbb{E}\big(|H - H'|\big) - 2\,\mathbb{E}\big(|X - X'|\,|H - H''|\big),$$
where $|\cdot|$ denotes the pairwise (Euclidean) distance. Subsequently, the distance correlation is defined as:
$$\mathrm{DistCorr}(X, H) = \frac{\mathrm{dCov}(X, H)}{\sqrt{\mathrm{dCov}(X, X)\,\mathrm{dCov}(H, H)}}.$$
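For reference, a minibatch sketch of both dependence measures is given below, assuming a Gaussian (RBF) kernel with bandwidth sigma for HSIC and using the standard biased empirical estimators; these are illustrative choices and not necessarily the exact estimators used in the experiments.

```python
import torch

def hsic(x, h, sigma=1.0):
    """Biased empirical HSIC with Gaussian kernels (following [21,22]).
    x: (B, n) flattened images; h: (B, d) feature representations."""
    B = x.size(0)
    def gram(a):
        d2 = torch.cdist(a, a).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    K, L = gram(x), gram(h)
    C = torch.eye(B, device=x.device) - 1.0 / B      # centering matrix
    return torch.trace(K @ C @ L @ C) / (B - 1) ** 2

def distance_correlation(x, h, eps=1e-9):
    """Empirical distance correlation (following [23])."""
    def centered(a):
        d = torch.cdist(a, a)                        # pairwise Euclidean distances
        return d - d.mean(dim=0, keepdim=True) - d.mean(dim=1, keepdim=True) + d.mean()
    A, B = centered(x), centered(h)
    dcov_xh = (A * B).mean().clamp(min=0).sqrt()
    dcov_xx = (A * A).mean().sqrt()
    dcov_hh = (B * B).mean().sqrt()
    return dcov_xh / ((dcov_xx * dcov_hh).sqrt() + eps)
```

Either function can serve as the Corr(·) term of the obfuscator loss during training.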

4.1.3. Classifier

The classifier $g_\theta$ is a lightweight neural network with two fully connected layers and ReLU activation functions. The classifier is jointly trained with the obfuscator to maintain the classification accuracy for the utility task:
$$(\hat{\theta}, \hat{\psi}) = \operatorname*{argmin}_{\theta, \psi}\; \mathcal{L}_{CE}(y_t, \hat{y}_t) + \gamma\, \mathcal{L}_{\mathrm{Corr}}(\mathbf{x}, \mathbf{h}_p),$$
where $\gamma$ is the utility-privacy trade-off parameter, $\mathcal{L}_{CE}$ denotes the cross-entropy between the utility attribute $y_t$ and its estimate $\hat{y}_t$, and $\mathcal{L}_{\mathrm{Corr}}$ denotes either DistCorr or HSIC according to Equations (6) and (8).
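A minimal sketch of this joint training step is shown below; it assumes the contrastively pre-trained encoder is kept frozen and uses illustrative module and optimizer names rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def train_step(x, y_t, encoder, obfuscator, classifier, corr_fn, optimizer, gamma=2.0):
    """One optimization step of the joint objective: cross-entropy on the
    target label plus a gamma-weighted correlation penalty (HSIC or DistCorr)."""
    with torch.no_grad():
        h = encoder(x)                                  # frozen, contrastively pre-trained encoder
    h_p = obfuscator(h)                                 # protected representation
    logits = classifier(h_p)
    loss = F.cross_entropy(logits, y_t) + gamma * corr_fn(x.flatten(1), h_p)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `optimizer` is assumed to hold the parameters of both the obfuscator and the classifier, matching the joint minimization over (θ, ψ).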

4.2. Experimental Results

4.2.1. Experimental Setup

Dataset: We conducted experiments on a celebrity face image dataset, CelebA [27], which consists of over 200,000 celebrity images, each annotated with 40 attributes. Every input image is center-cropped to 178 × 178 and then resized to 128 × 128. We select the “gender” attribute for our intended classification task.
Attacker setup: The adversary has a set of publicly available protected representations h p with the corresponding original images x and aims to train a decoder to reconstruct the original image for the model inversion attack.

4.2.2. Visualizations of Encoded Features

This section investigates the effect of the supervised contrastive loss on the encoded features. To do so, we visualized the 2D t-SNEs of the extracted features for the target class label “gender,” as depicted in Figure 4. As expected, the output features of the encoder trained with the supervised loss are more discriminative than those of the encoder trained in an unsupervised way.

4.2.3. Classification Performance

In this section, the utility-privacy trade-off is investigated in the form of classification accuracy vs. decorrelation. More specifically, we are interested in analyzing the extent to which classification accuracy decreases when the features are decorrelated from the original images. As shown in Table 1, with only a 0.2% loss in accuracy, the correlation between the input images and the features drops for both similarity measures. In the case of HSIC, the reduction in correlation is particularly remarkable. The very small loss in accuracy is mainly due to the supervised contrastive loss used in training the encoder, which yields discriminative features with respect to the target class. In Section 4.2.4, we demonstrate that an attacker can still reconstruct completely recognizable images from these discriminative features. Consequently, the obfuscator aims at removing all the redundant information about the images and keeping only the information related to the intended classification task.

4.2.4. Reconstruction Attack

According to Figure 5, the adversary model for the reconstruction attack consists of a generator $G_{\theta_x}$ and a discriminator $D_{\theta_{x\hat{x}}}$. The generator network maps the protected and obfuscated feature representation $\mathbf{h}_p$ to the image space, while the discriminator evaluates the result. The discriminator assigns a probability that an image comes from the real data distribution rather than the generator distribution. Thus, the discriminator is trained to classify images as being from the training data or reconstructed by the generator:
$$\mathcal{L}_D = \log\big(D_{\theta_{x\hat{x}}}(\mathbf{x})\big) + \log\big(1 - D_{\theta_{x\hat{x}}}(G_{\theta_x}(\mathbf{h}_p))\big).$$
Therefore, the generator and discriminator are trained through a min-max optimization problem:
$$\min_{\theta_x} \max_{\theta_{x\hat{x}}}\; \mathbb{E}_{p(\mathbf{x})}\big[\log D_{\theta_{x\hat{x}}}(\mathbf{x})\big] + \mathbb{E}_{p(\mathbf{h}_p)}\big[\log\big(1 - D_{\theta_{x\hat{x}}}(G_{\theta_x}(\mathbf{h}_p))\big)\big].$$
To improve the performance of the generator, a perceptual loss similar to SRGAN [28] was also employed. The perceptual loss for the generator network consists of an adversarial loss and a content loss:
$$\mathcal{L}_{\mathrm{perceptual}} = \underbrace{\mathcal{L}_{\mathrm{mse}} + \mathcal{L}_{\mathrm{vgg}}}_{\text{content loss}} + \underbrace{\mathcal{L}_{D_g}}_{\text{adversarial loss}},$$
and:
$$\mathcal{L}_{\mathrm{mse}} = \mathbb{E}_{p(\mathbf{x}, \mathbf{h}_p)}\big\| \mathbf{x} - G_{\theta_x}(\mathbf{h}_p) \big\|, \quad \mathcal{L}_{\mathrm{vgg}} = \mathbb{E}_{p(\mathbf{x}, \mathbf{h}_p)}\big\| \mathrm{vgg}_{19}(\mathbf{x}) - \mathrm{vgg}_{19}(G_{\theta_x}(\mathbf{h}_p)) \big\|, \quad \mathcal{L}_{D_g} = -\mathbb{E}_{p(\mathbf{h}_p)}\big[\log\big(D_{\theta_{x\hat{x}}}(G_{\theta_x}(\mathbf{h}_p))\big)\big],$$
where $\mathrm{vgg}_{19}(\cdot)$ denotes the feature output of a pre-trained 19-layer VGG network [29].
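The sketch below illustrates the attacker's generator objective using torchvision's VGG-19 features; the layer cut-off, weight selection, and loss weighting are assumptions for illustration and not the exact attacker configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Frozen VGG-19 feature extractor used for the content loss
# (layer cut-off and weight choice are illustrative assumptions).
_vgg = vgg19(weights="IMAGENET1K_V1").features[:36].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def generator_perceptual_loss(x, h_p, generator, discriminator):
    """Perceptual loss for the attacker's generator: MSE + VGG content loss
    plus a non-saturating adversarial term."""
    x_hat = generator(h_p)                              # attempted reconstruction from h_p
    l_mse = F.mse_loss(x_hat, x)
    l_vgg = F.mse_loss(_vgg(x_hat), _vgg(x))
    l_adv = -torch.log(discriminator(x_hat) + 1e-8).mean()
    return l_mse + l_vgg + l_adv
```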
We conducted experiments on the reconstruction attack for different correlation losses and different values of γ in Equation (9). The performance of the attack model is evaluated using multi-scale structural similarity (MSSIM) [30] and SSIM [20]. To better evaluate the effectiveness of the proposed obfuscation model, the reconstruction quality from the following scenarios has been considered:
  • h : The feature representations of original images;
  • h x noisy : The raw images are perturbed by adding Gaussian noise and fed to the encoder to get the features;
  • h noisy : The feature representations of original images are perturbed by adding Gaussian noise;
  • h p : The obfuscated and protected features.
The average SSIM and MSSIM for the images reconstructed from the protected features and the three other scenarios are reported in Table 2. As the SSIM and MSSIM scores were very close for both correlation measures and different values of $\gamma$, we only report the ones for DistCorr with $\gamma = 2$ in Table 2. The results show that both similarity measures drop by a large margin with only a 0.2% loss in accuracy, thereby validating the effectiveness of the obfuscator.
Moreover, the visualization of the reconstructed images is illustrated in Figure 6. The reconstructed images from the raw features are completely recognizable, but not very sharp. This is mainly because the encoder is trained with the supervised contrastive loss, where the information about the target class is mostly left in the last layer. On the other hand, the output images become completely unrecognizable with our proposed obfuscator, and even a powerful decoder can only output an average image. To further investigate the effect of correlation measure and γ in Equation (9), the output images for different cases are presented in Figure 7. Even though the attacker outputs an average image for both cases of correlation measures, it is interesting to note that features learned by HSIC produce different average images for males and females. In other words, the gender information is clearly preserved in the protected representation.

5. Defense against an Attribute Inference Attack

Herein, our primary focus is to design a framework for defense against attribute inference attacks. The defender attempts to share a representation with relevant information about the target class label, but keeps the sensitive attribute private.
The model consists of four modules: encoder, obfuscator, target classifier, and adversary classifier. The encoder is trained using supervised and private contrastive loss to provide maximal discrimination for the classification task while protecting the private attribute. Furthermore, the encoded features are obfuscated, and the target classifier is jointly trained to maintain the classification performance. Finally, adversarial training is applied between the target classifier and the adversary classifier.

5.1. Proposed Architecture

The overall private data-sharing framework, shown in Figure 8, consists of four steps:
  • An encoder  f ϕ is pre-trained on the public data using supervised and private contrastive loss. The encoder is later used to extract discriminative representation for the targeted classification task;
  • An obfuscator  f ψ is learned to remove relevant information in the representation h about the private attribute;
  • A target classifier  g θ t is jointly trained with the obfuscator to ensure that the useful information for the intended classification task is preserved in the obfuscated representation;
  • An adversary classifier  g θ a is adversarially trained to minimize the classification error for the private attribute.

5.1.1. Encoder

As displayed in Figure 3, the encoder f ϕ is initially trained with supervised and private contrastive loss to output a well-discriminated feature representation and protect the private attributes. As mentioned in the previous section, the key idea behind contrastive loss is to push negative pairs apart and pull positive ones close. In a supervised contrastive loss, the positive pairs are those with the same target labels. Maximal discrimination can thus be achieved with respect to the target class.
This concept can be further extended to preserve the privacy of private attributes by allowing minimal discrimination regarding the sensitive label. In other words, for a supervised and private contrastive loss, we will assume:
  • Positive pairs: Those with the same target label as the anchor image;
  • Negative pairs: Those with a different target label and the same private label as the anchor image.
Therefore, for an augmented dataset $\mathcal{D} = \{(\mathbf{x}_{1,i}, \mathbf{x}_{1,j}, y_{1,t}, y_{1,p}), \ldots, (\mathbf{x}_{N,i}, \mathbf{x}_{N,j}, y_{N,t}, y_{N,p})\}$, we can define the positive and negative sets for each sample $\mathbf{x}_k$ as:
$$P(\mathbf{x}_k) = \big\{(\mathbf{x}_{l,i}, \mathbf{x}_{l,j}) \mid y_{k,t} = y_{l,t}\big\}_{l=1}^{N}, \qquad N(\mathbf{x}_k) = \big\{(\mathbf{x}_{l,i}, \mathbf{x}_{l,j}) \mid y_{k,t} \neq y_{l,t} \,\wedge\, y_{k,p} = y_{l,p}\big\}_{l=1}^{N}.$$
The supervised and private contrastive loss based on SupCon [19] can thus be defined as:
$$\mathcal{L}_{private\text{-}supcon} = \sum_i \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp\big(\mathrm{sim}(\mathbf{z}_i, \mathbf{z}_p)\big)}{\sum_{k \in N(i)} \exp\big(\mathrm{sim}(\mathbf{z}_i, \mathbf{z}_k)\big)},$$
where $P(i)$ and $N(i)$ denote the positive and negative sets with respect to sample $\mathbf{x}_i$. Similar to SupCon [19], Dai et al. introduced a supervised contrastive loss based on Momentum Contrast (MoCo) [31], denoted as UniCon [32]:
$$\mathcal{L}_{unicon} = \log\left(1 + \frac{\sum_{\{k^-\}} \exp(s_{k^-})}{\sum_{\{k^+\}} \exp(s_{k^+})}\right),$$
where $s$ denotes the similarity score and $\{k^-\}$, $\{k^+\}$ are the subsets of negative and positive pairs, respectively. Likewise, we can extend the UniCon loss to take the private and sensitive attributes into account as:
$$\mathcal{L}_{private\text{-}unicon} = \log\left(1 + \frac{\sum_{k^- \in N(\mathbf{x}_k)} \exp(s_{k^-})}{\sum_{k^+ \in P(\mathbf{x}_k)} \exp(s_{k^+})}\right).$$
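A minimal sketch of this private contrastive loss is given below, building the positive and negative sets from the target and private labels of the batch; the temperature and function name are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def private_unicon_loss(z, y_t, y_p, temperature=0.1):
    """Sketch of a private UniCon-style loss: positives share the anchor's
    target label; negatives have a different target label but the same private label."""
    z = F.normalize(z, dim=1)
    s = z @ z.t() / temperature                        # similarity scores s_k
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)

    same_t = y_t.unsqueeze(0).eq(y_t.unsqueeze(1))
    same_p = y_p.unsqueeze(0).eq(y_p.unsqueeze(1))
    pos = same_t & ~self_mask                          # P(x_k)
    neg = ~same_t & same_p                             # N(x_k)

    pos_term = (s.exp() * pos).sum(dim=1)
    neg_term = (s.exp() * neg).sum(dim=1)
    return torch.log1p(neg_term / (pos_term + 1e-8)).mean()
```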

5.1.2. Obfuscator

The obfuscator f ψ is trained to hide sensitive and private attributes from the shared representation while keeping the relevant information regarding the target class label.

5.1.3. Target Classifier

The classifier $g_{\theta_t}$ is a lightweight neural network with three fully connected layers and ReLU activation functions. The classifier is jointly trained with the obfuscator to maintain the classification accuracy for the target class label:
$$(\hat{\theta}_t, \hat{\psi}) = \operatorname*{argmin}_{\theta_t, \psi}\; \mathcal{L}_{CE}(y_t, \hat{y}_t),$$
where $\mathcal{L}_{CE}$ denotes the cross-entropy between the target attribute $y_t$ and its estimate $\hat{y}_t$.

5.1.4. Adversary Classifier

The adversary classifier g θ a plays the role of an attacker attempting to infer private attributes using the eavesdropped features. We simulate a game between the adversary and the defender through an adversarial training procedure. The attacker tries to minimize the classification error for the private attributes as:
$$\hat{\theta}_a = \operatorname*{argmin}_{\theta_a}\; \mathcal{L}_{CE}(y_p, \hat{y}_p).$$
Meanwhile, the defender aims to degrade the performance of the adversary classifier and minimize the private attribute leakage while maintaining good performance on the target classification task. Hence:
$$\hat{\psi} = \operatorname*{argmin}_{\psi}\; \mathcal{L}_{CE}(y_t, \hat{y}_t) - \gamma\, \mathcal{L}_{CE}(y_p, \hat{y}_p),$$
where γ is the utility-privacy trade-off parameter. Algorithm 1 delineates the overall steps in our proposed adversarial training procedure.
Algorithm 1 Adversarial Training Procedure
  Input: dataset $\mathcal{D}$ and parameter $\gamma$
  Output: $\phi$, $\psi$, $\theta_t$, $\theta_a$
  1: for every epoch do
  2:     Sample a minibatch from the dataset
  3:     Train $\phi$ using $\mathcal{L}_{private\text{-}supcon}$ or $\mathcal{L}_{private\text{-}unicon}$ in Equations (15) and (16)
  4: end for
  5: for every epoch do
  6:     Sample a minibatch from the dataset
  7:     Train $\psi$ to minimize $\mathcal{L}_{CE}(y_t, \hat{y}_t) - \gamma\, \mathcal{L}_{CE}(y_p, \hat{y}_p)$
  8:     Train $\theta_a$ to minimize $\mathcal{L}_{CE}(y_p, \hat{y}_p)$
  9:     Train $\theta_t$ to minimize $\mathcal{L}_{CE}(y_t, \hat{y}_t)$
 10: end for
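A compact PyTorch-style sketch of the second loop of Algorithm 1 is given below; module, optimizer, and loader names are illustrative, and the encoder is assumed to be frozen after the first (contrastive) loop.

```python
import torch
import torch.nn.functional as F

def adversarial_epoch(loader, encoder, obfuscator, clf_t, clf_a,
                      opt_psi, opt_theta_a, opt_theta_t, gamma=1.0):
    """One epoch of the adversarial game between the obfuscator (psi),
    the adversary classifier (theta_a), and the target classifier (theta_t)."""
    for x, y_t, y_p in loader:
        with torch.no_grad():
            h = encoder(x)                     # encoder phi pre-trained in the first loop

        # update the obfuscator: keep target information, hide the private attribute
        h_p = obfuscator(h)
        loss_psi = F.cross_entropy(clf_t(h_p), y_t) - gamma * F.cross_entropy(clf_a(h_p), y_p)
        opt_psi.zero_grad()
        loss_psi.backward()
        opt_psi.step()

        # update the adversary classifier on the (detached) protected features
        h_p = obfuscator(h).detach()
        loss_a = F.cross_entropy(clf_a(h_p), y_p)
        opt_theta_a.zero_grad()
        loss_a.backward()
        opt_theta_a.step()

        # update the target classifier
        loss_t = F.cross_entropy(clf_t(h_p), y_t)
        opt_theta_t.zero_grad()
        loss_t.backward()
        opt_theta_t.step()
```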

5.2. Experimental Results

This section analyzes the effectiveness of the proposed framework. For the rest of this section, we refer to utility as the classification accuracy on the target class label. Similarly, privacy is defined as the classification performance on the private and sensitive attribute.

5.2.1. Experimental Setup

Dataset: We conducted experiments on a celebrity face image dataset, CelebA [27], which consists of over 200,000 celebrity images, each annotated with 40 attributes. Every input image is center-cropped to 178 × 178 and then resized to 128 × 128. We select the “gender” attribute for our intended classification task and “age,” with two classes of young and old, as the sensitive attribute.
Attacker setup: The adversary has a set of publicly available protected representations h p with the corresponding original images x and their protected labels y p and aims to train a classifier to re-identify the protected attribute.
Defender setup: The primary goal of the defender is two-fold. First, the defender aims to preserve a high classification accuracy, expressed as “target accuracy,” with respect to the utility attribute $y_t$. At the same time, the defender wishes to decrease the classification accuracy on the attacker’s side, represented by “private accuracy” with respect to the protected attribute $y_p$. The privacy-utility trade-off is controlled by different values of $\gamma$ in Equation (20). This trade-off is best achieved when, firstly, the publicly available representation $\mathbf{h}_p$ is discriminative with respect to the target attribute and, secondly, an obfuscation mechanism removes the information in $\mathbf{h}_p$ that is relevant to the private attribute.

5.2.2. Impact of the Obfuscator

In this section, we investigate the impact of the obfuscator. Therefore, keeping the encoder constant, we design an attribute inference attack to classify the private and sensitive attribute with and without the obfuscator. To analyze the privacy trade-off, we experimented with different values of γ in Equation (20), and the results are reported in Table 3.
As shown in Table 3, the classification accuracy drops significantly when the obfuscation is applied, thus validating the effectiveness of the obfuscator module. The obtained results show that the decline in utility is very small, with only a 0.3–0.7% decrease in target accuracy. Moreover, increasing $\gamma$ decreases the private classification accuracy. However, in view of privacy protection, random guessing is the ultimate goal in a binary classification setting, as the adversary can flip his guess for any accuracy lower than the random guessing threshold. To account for this, the flipped accuracies are also reported in the last row of Table 3. For the CelebA dataset, the class label “age” is slightly imbalanced, distributed as roughly 75–25%; thereby, the corresponding random guessing threshold is 62.5% (0.75 × 0.75 + 0.25 × 0.25 = 0.625). Therefore, from a privacy protection point of view, the best result is obtained with the UniCon loss at $\gamma = 1$.

5.2.3. Privacy-Utility Trade-Off Comparison

To better evaluate the effectiveness of the proposed framework model, the privacy–utility trade-off for different scenarios has been investigated. The results in Table 3 validate the effectiveness of the obfuscator module. Putting the obfuscator aside, we are interested in analyzing the impact of using supervised and private loss compared to the conventional contrastive loss in Equation (2). To evaluate that, we considered the following scenarios:
  • h : the feature representations of original images from an encoder trained with a conventional contrastive loss in Equation (2);
  • h x noisy : the feature representations of noisy images from an encoder trained with a conventional contrastive loss in Equation (2);
  • h noisy : noisy feature representations of original images from an encoder trained with a conventional contrastive loss in Equation (2);
  • h private unicon : the feature representations of original images from an encoder trained with private UniCon loss in Equation (16);
  • h private supcon : the feature representations of original images from an encoder trained with private SupCon loss in Equation (15);
  • h p unicon : the obfuscated and protected features of the proposed framework using UniCon loss in Equation (16);
  • h p supcon : the obfuscated and protected features of the proposed framework using SupCon loss in Equation (15).
The privacy–utility tradeoff in the form of target and private accuracy for various settings is reported in Table 4. The final accuracies were flipped in cases lower than the random guessing threshold for a fair comparison.
Impact of supervised and private contrastive loss: As reported in Table 4, the accuracy on the target class is higher for both SupCon and UniCon compared to the unsupervised contrastive loss. This is mainly due to the fact that no label information is used in the conventional contrastive loss (Equation (2)). In addition, the accuracy on the private attribute is 4% lower for h private unicon and h private supcon compared to h, showing the benefit of using the supervised and private loss.
Impact of adding noise: Adding noise to the raw images or the extracted features can be considered as a defense mechanism; injecting Gaussian noise into the data has been widely used in federated learning [33,34]. Indeed, the results in Table 4 demonstrate that privacy increases as we add noise to the images or the features. Moreover, raising the variance of the noise leads to a larger privacy gain. However, the private classification accuracies for noisy data are still far from the results achieved by the proposed framework. Moreover, adding noise also costs utility, as the target accuracy drops.
Comparison to DeepObfuscator [16]: We carefully examined other state-of-the-art papers for a fair comparison. Unfortunately, differences in the problem formulation make such a comparison difficult and unfair in some cases. For example, several works have studied the privacy leakage of a face verification system, which differs from the attribute classification problem formulation. In ref. [35], the authors proposed an adversarial framework for reducing gender information in the final embedding vectors used for the verification system. Hence, even though the privacy task of attribute leakage in the embeddings is the same, the utility is defined differently, thereby making the comparison infeasible.
Moreover, several studies have investigated the same utility–privacy formulation as our proposed framework. However, they differ in their overall setting. For example, Boutet et al. [36] proposed a privacy-preserving framework against attribute inference attacks in a federated learning setting. In their experiments, the main target label is “smiling,” while the protected label is the “gender” of users.
Nevertheless, a very similar problem formulation and setting are studied in ref. [16]. Li et al. [16] exploit an adversarial game to maintain the classification performance on the public class label while protecting against an attribute inference attack. As they used different target and private attributes, we re-ran their obfuscator model with our public and private attributes. The DeepObfuscator model in [16] was further adapted to only consider the attribute inference attack. The results reported in Table 4 demonstrate the superior performance of the proposed method compared to DeepObfuscator.

6. Conclusions

This paper addressed the problem of template protection against the most commonly used attacks, namely, reconstruction and attribute inference attacks. Two defense frameworks based on contrastive learning were proposed.
For defense against the reconstruction attack, we directly minimize the correlation and dependencies of the encoded features with the original data, avoiding the unnecessary complications of min-max adversarial training. Furthermore, training the encoder with the supervised contrastive loss maximizes discrimination in the feature space with respect to the target class and removes redundant information about the original images. Hence, there is no substantial loss in classification performance, and the proposed framework provides a better utility-privacy trade-off.
In the attribute inference attack, the adversary wishes to infer the private attribute given the shared protected templates. Therefore, in the first defense step, we propose an encoder trained with the supervised and private contrastive loss. Furthermore, an obfuscator module is trained in an adversarial manner to preserve the privacy of private attributes while maintaining a good classification performance. The reported results on the CelebA dataset validate the effectiveness of the proposed framework. Future work will aim at designing a framework based on contrastive loss that considers both reconstruction and attribute inference attacks simultaneously. Another interesting avenue of research is to investigate the performance of the proposed framework on other datasets.

Author Contributions

Conceptualization, S.R. and S.V.; methodology, S.R. and V.K.; data curation, M.A.J.; supervision, S.V.; investigation, S.R.; validation, S.R. and V.K. and M.A.J.; visualization, S.R. and M.A.J.; writing—original draft, S.R.; Writing—review & editing, S.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tanuwidjaja, H.C.; Choi, R.; Baek, S.; Kim, K. Privacy-Preserving Deep Learning on Machine Learning as a Service—A Comprehensive Survey. IEEE Access 2020, 8, 167425–167447. [Google Scholar] [CrossRef]
  2. Mai, G.; Cao, K.; Yuen, P.C.; Jain, A.K. On the Reconstruction of Face Images from Deep Face Templates. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 1188–1202. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Hill, S.; Zhou, Z.; Saul, L.; Shacham, H. On the (in) effectiveness of mosaicing and blurring as tools for document redaction. Proc. Priv. Enhancing Technol. 2016, 2016, 403–417. [Google Scholar] [CrossRef] [Green Version]
  4. Boracchi, G.; Foi, A. Modeling the performance of image restoration from motion blur. IEEE Trans. Image Process. 2012, 21, 3502–3517. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Vishwamitra, N.; Knijnenburg, B.; Hu, H.; Kelly Caine, Y.P. Blur vs. block: Investigating the effectiveness of privacy-enhancing obfuscation for images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 39–47. [Google Scholar]
  6. Tekli, J.; Al Bouna, B.; Couturier, R.; Tekli, G.; Al Zein, Z.; Kamradt, M. A Framework for Evaluating Image Obfuscation under Deep Learning-Assisted Privacy Attacks. In Proceedings of the 2019 17th International Conference on Privacy, Security and Trust (PST), Fredericton, NB, Canada, 26–28 August 2019; pp. 1–10. [Google Scholar]
  7. McPherson, R.; Shokri, R.; Shmatikov, V. Defeating image obfuscation with deep learning. arXiv 2016, arXiv:1609.00408. [Google Scholar]
  8. Li, T.; Lin, L. AnonymousNet: Natural Face De-Identification With Measurable Privacy. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 56–65. [Google Scholar]
  9. Ren, Z.; Lee, Y.J.; Ryoo, M.S. Learning to anonymize faces for privacy preserving action detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 620–636. [Google Scholar]
  10. Li, J.; Han, L.; Chen, R.; Zhang, H.; Han, B.; Wang, L.; Cao, X. Identity-Preserving Face Anonymization via Adaptively Facial Attributes Obfuscation. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual Event, China, 20–24 October 2021; pp. 3891–3899. [Google Scholar]
  11. Ogburn, M.; Turner, C.; Dahal, P. Homomorphic encryption. Procedia Comput. Sci. 2013, 20, 502–509. [Google Scholar] [CrossRef] [Green Version]
  12. Ossia, S.A.; Taheri, A.; Shamsabadi, A.S.; Katevas, K.; Haddadi, H.; Rabiee, H.R. Deep Private-Feature Extraction. IEEE Trans. Knowl. Data Eng. 2020, 32, 54–66. [Google Scholar] [CrossRef] [Green Version]
  13. Li, M.; Lai, L.; Suda, N.; Chandra, V.; Pan, D.Z. Privynet: A flexible framework for privacy-preserving deep neural network training. arXiv 2017, arXiv:1709.06161. [Google Scholar]
  14. Pittaluga, F.; Koppal, S.; Chakrabarti, A. Learning privacy preserving encodings through adversarial training. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 26–28 August 2019; pp. 791–799. [Google Scholar]
  15. Liu, S.; Du, J.; Shrivastava, A.; Zhong, L. Privacy adversarial network: Representation learning for mobile data privacy. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies; Association for Computing Machinery: New York, NY, USA, 2019; Volume 3, pp. 1–18. [Google Scholar]
  16. Li, A.; Guo, J.; Yang, H.; Salim, F.D.; Chen, Y. DeepObfuscator: Obfuscating Intermediate Representations with Privacy-Preserving Adversarial Learning on Smartphones. arXiv 2019, arXiv:1909.04126. [Google Scholar]
  17. Huang, C.; Kairouz, P.; Chen, X.; Sankar, L.; Rajagopal, R. Context-aware generative adversarial privacy. Entropy 2017, 19, 656. [Google Scholar] [CrossRef] [Green Version]
  18. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning; PMLR: London, UK, 2020; pp. 1597–1607. [Google Scholar]
  19. Khosla, P.; Teterwak, P.; Wang, C.; Sarna, A.; Tian, Y.; Isola, P.; Maschinot, A.; Liu, C.; Krishnan, D. Supervised contrastive learning. arXiv 2020, arXiv:2004.11362. [Google Scholar]
  20. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Gretton, A.; Bousquet, O.; Smola, A.; Schölkopf, B. Measuring statistical dependence with Hilbert-Schmidt norms. In International Conference on Algorithmic Learning Theory; Springer: Berlin/Heidelberg, Germany, 2005; pp. 63–77. [Google Scholar]
  22. Gretton, A.; Fukumizu, K.; Teo, C.H.; Song, L.; Schölkopf, B.; Smola, A.J. A kernel statistical test of independence. In Proceedings of the 20th International Conference on Neural Information Processing Systems (NIPS 2007), Vancouver, BC, Canada, 3–6 December 2007; pp. 585–592. [Google Scholar]
  23. Székely, G.J.; Rizzo, M.L.; Bakirov, N.K. Measuring and testing dependence by correlation of distances. Ann. Stat. 2007, 35, 2769–2794. [Google Scholar] [CrossRef]
  24. Sun, J.; Yao, Y.; Gao, W.; Xie, J.; Wang, C. Defending against Reconstruction Attack in Vertical Federated Learning. arXiv 2021, arXiv:2107.09898. [Google Scholar]
  25. Kasieczka, G.; Shih, D. Robust Jet Classifiers through Distance Correlation. Phys. Rev. Lett. 2020, 125, 122001. [Google Scholar] [CrossRef]
  26. Vepakomma, P.; Singh, A.; Gupta, O.; Raskar, R. NoPeek: Information leakage reduction to share activations in distributed deep learning. arXiv 2020, arXiv:cs.LG/2008.09161. [Google Scholar]
  27. Liu, Z.; Luo, P.; Wang, X.; Tang, X. Deep Learning Face Attributes in the Wild. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 3730–3738. [Google Scholar]
  28. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
  29. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  30. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402. [Google Scholar]
  31. He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual Event, 13–19 June 2020; pp. 9729–9738. [Google Scholar]
  32. Dai, Z.; Cai, B.; Lin, Y.; Chen, J. UniMoCo: Unsupervised, Semi-Supervised and Full-Supervised Visual Representation Learning. arXiv 2021, arXiv:2103.10773. [Google Scholar]
  33. Papernot, N.; Song, S.; Mironov, I.; Raghunathan, A.; Talwar, K.; Erlingsson, Ú. Scalable private learning with PATE. arXiv 2018, arXiv:1802.08908. [Google Scholar]
  34. Truex, S.; Baracaldo, N.; Anwar, A.; Steinke, T.; Ludwig, H.; Zhang, R.; Zhou, Y. A hybrid approach to privacy-preserving federated learning. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, London, UK, 15 November 2019; pp. 1–11. [Google Scholar]
  35. Dhar, P.; Gleason, J.; Souri, H.; Castillo, C.D.; Chellappa, R. Towards gender-neutral face descriptors for mitigating bias in face recognition. arXiv 2020, arXiv:2006.07845. [Google Scholar]
  36. Boutet, A.; Lebrun, T.; Aalmoes, J.; Baud, A. MixNN: Protection of Federated Learning against Inference Attacks by Mixing Neural Network Layers. arXiv 2021, arXiv:2109.12550. [Google Scholar]
Figure 1. Threat model. The user sends the private representations to the server for final classification. Eavesdropping on the private features, the adversary wishes to reconstruct the original data or infer sensitive attributes. The adversary does not have access to the local obfuscation mechanism used by the user, shown in blue dashed lines.
Figure 2. General diagram of the proposed framework for defense against reconstruction attack. L C E denotes cross-entropy and L Corr stands for a similarity metric.
Figure 3. Encoder training using supervised contrastive learning.
Figure 4. T-SNE visualization of output features for unsupervised and supervised contrastive losses.
Figure 5. Adversary model for reconstruction attack.
Figure 6. Visual performance of the reconstruction attack from different features. First row: h , second row: h x noisy , third row: h noisy , and the last row: h p for DistCorr, γ = 20 .
Figure 7. Visual performance of the reconstructed output from the protected features for different correlation measures. First row: original images; rows 2, 3: DistCorr for γ = 2, 20; rows 4, 5: HSIC for γ = 2, 20.
Figure 8. General diagram of the proposed framework for defense against an attribute inference attack.
Table 1. Classification vs. Correlation.

Correlation Type       Accuracy (%)   DistCorr   HSIC
without obfuscation    98.48          0.714      0.62
DistCorr, γ = 2        98.20          0.24       0.25
DistCorr, γ = 20       98.10          0.21       0.23
HSIC, γ = 2            98.23          0.32       0.026
HSIC, γ = 20           98.17          0.29       0.007
Table 2. Image reconstruction comparison.

Obfuscation    SSIM    MSSIM    Accuracy (%)
h              0.40    0.56     98.48
h x-noisy      0.36    0.50     98.41
h noisy        0.30    0.43     98.37
h p            0.19    0.16     98.20
Table 3. Classification accuracy on the CelebA dataset on target and private attributes for UniCon and SupCon loss and different values of γ.

Accuracy (%)              UniCon                        SupCon                        w/o obfs.
                          γ = 1    γ = 2    γ = 10      γ = 1    γ = 2    γ = 10
target accuracy           98.37    98.34    98.30       98.33    98.31    98.30       98.34
private accuracy          32.50    25.32    19.58       25.32    25.32    17.89       82.20
100 − private accuracy    67.50    74.68    80.42       74.68    74.68    82.11       -
Table 4. Privacy-Utility trade-off.

Features                          Target Accuracy (%)    Private Accuracy (%)
h                                 98.24                  86.30
h x-noisy                         98.03                  86.05
h noisy                           97.61                  84.50
h private-unicon                  98.38                  82.23
h private-supcon                  98.34                  82.20
ours: h p unicon, γ = 1           98.30                  67.50
ours: h p supcon, γ = 1           98.30                  74.68
DeepObfuscator [16]               97.75                  76.03
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

