Effects of Image Compression on Face Image Manipulation Detection: A Case Study on Facial Retouching

In the past years, numerous methods have been introduced to reliably detect digital face image manipulations. Lately, the generalizability of these schemes has been questioned in particular with respect to image post-processing. Image compression represents a post-processing which is frequently applied in diverse biometric application scenarios. Severe compression might erase digital traces of face image manipulation and hence hamper a reliable detection thereof. In this work, the effects of image compression on face image manipulation detection are analyzed. In particular, a case study on facial retouching detection under the influence of image compression is presented. To this end, ICAO-compliant subsets of two public face databases are used to automatically create a database containing more than 9,000 retouched reference images together with unconstrained probe images. Subsequently, reference images are compressed applying JPEG and JPEG 2000 at compression levels recommended for face image storage in electronic travel documents. Novel detection algorithms utilizing texture descriptors and deep face representations are proposed and evaluated in a single image and differential scenario. Results obtained from challenging cross-database experiments in which the analyzed retouching technique is unknown during training yield interesting findings: (1) most competitive detection performance is achieved for differential scenarios employing deep face representations; (2) image compression severely impacts the performance of face image manipulation detection schemes based on texture descriptors while methods utilizing deep face representations are found to be highly robust; (3) in some cases, the application of image compression might as well improve detection performance.


Christian Rathgeb, Kevin Bernardo, Nathania E. Haryanto, Christoph Busch
Index Terms-Face Image Manipulation Detection, Image Compression, Retouching, Face Recognition.

I. INTRODUCTION
Digital face manipulation has rapidly advanced in the past years and many different methods have been proposed, such as morphing [1], [2], swapping [3], [4], or retouching [5]. This may lead to a loss of trust in digital content and can cause further harm by spreading false information or fake news, as well as by attacking face recognition systems. In the recent past, numerous image manipulation detection schemes have been proposed in the scientific literature; for surveys, the interested reader is referred to [6], [7]. For such face manipulation detection algorithms, the detection performance can highly depend on the quality of the manipulated image as well as on applied image post-processing. It was found that image compression can impact face recognition performance [8] as well as detection methods [9], [10]. More precisely, the application of image compression can hamper the extraction of detailed textural information, which might represent a powerful source of information for manipulation detection. Additionally, artefacts resulting from the manipulation process might vanish under severe image compression. Similar effects are to be expected for other types of post-processing, e.g. color-space transformations or even print-scan transformations, which might be applied less frequently. From a practical point of view, robustness of detection methods against image compression is of high importance, since image compression is applied to facial images in various application scenarios, e.g. the storage of face images in electronic travel documents. Among proposed face manipulation techniques, facial retouching, a.k.a. "photoshopping", represents one of the most prominent. Retouching methods have become common tools which are frequently used to enhance one's facial appearance, e.g. prior to sharing face images via social media.
Retouching of face images in the digital domain causes alterations similar to those achieved by plastic surgery [11], [12] or makeup [13], which have already been shown to have negative effects on face recognition. Beyond that, further changes can be made to face images in the digital domain, e.g. enlarging the eyes. Besides professional image editing software, e.g. Photoshop, there exist plenty of mobile applications, i.e. apps, which provide many filters and special beautification effects that can be applied easily even by unskilled users. Hence, retouching methods represent an easy-to-use face image manipulation technique of high relevance. Fig. 1 shows an example of facial retouching. It can be observed that retouching usually results in local as well as global changes with the aim of an overall natural appearance. It was found that human observers achieve only low accuracy in detecting such face image manipulations [14], [15], which necessitates the development of automated procedures with the aim of reliably detecting said manipulations. Moreover, alterations induced by retouching have been shown to represent a challenge for face recognition [5]. Towards deploying secure face recognition and enforcing anti-photoshop legislation, a reliable detection of retouched face images is of utmost importance. Ferrara et al. [17], [21] were the first to measure the impact of retouching on facial recognition systems. They reported significant performance degradation for various facial recognition systems after the application of strong facial retouching. These findings have been confirmed by Bharati et al. [18], [19], while Rathgeb et al. [5], [10] noted that face recognition systems might be robust to the application of moderate facial retouching.
Different facial retouching detection procedures have also been proposed in the scientific literature. Table I lists the most important works examining the effects of facial retouching on facial recognition, along with proposed detection systems, used databases, applied methods, and reported results. Performance rates are mostly reported using standardized metrics for measuring biometric performance [22], e.g. Equal Error Rate (EER) or Rank-1 Identification Rate (R-1). For detection schemes, the Correct Classification Rate (CCR), which corresponds to the Detection Equal Error Rate (D-EER), is frequently used.
To distinguish between unaltered and retouched facial images, Bharati et al. [18], [19] proposed different deep learning-based techniques. To this end, a sufficient number of retouched facial images was automatically generated for training purposes. A deep learning approach to detecting any kind of facial retouching (including GAN-based changes) was proposed by Jain et al. [9]. In terms of retouching detection, impressive performance rates (>99% CCR) were reported when training and testing were performed on disjoint subsets of the database introduced in [18]. More recently, Wang et al. [15] introduced a deep learning-based facial retouching detection scheme which is specifically designed to detect image warping operations performed using the Adobe Photoshop software. Rathgeb et al. [10] proposed a facial retouching detection scheme which makes use of well-established image forensics techniques. Specifically, different spatial and spectral features extracted from Photo Response Non-Uniformity (PRNU) patterns across image regions are analyzed. In contrast to the aforementioned approaches, Rathgeb et al. [20] suggested a differential detection scheme in which a suspected image and an additional trusted image serve as input to the detection system. Different feature types, including texture descriptors, facial landmarks, and deep face features, are extracted from image pairs, and difference vectors are classified employing SVMs. It is shown that a fusion of all feature types yields the lowest detection error rates. By employing a differential detection scheme, competitive detection performance can be achieved, even in a cross-database scenario where the employed retouching algorithm is not known by the detection algorithm.
In this work, we investigate the effects of image compression on detection methods for face image manipulation based on facial retouching. To this end, subsets of the FERET and FRGCv2 face databases are used to automatically create a database containing 9,078 retouched face images together with unconstrained probe images. JPEG [23] and JPEG 2000 [24] are then used to compress reference images at levels which comply with the requirements of the International Civil Aviation Organization (ICAO) for electronic travel documents. Single image and differential retouching detection scenarios are considered, where in the latter case a trusted (but unconstrained) probe image is additionally available during detection. This scenario, which allows the estimation of differences between a processed image pair, is motivated by the assumption that in many real-world scenarios, e.g. automated border control, it is plausible that at least one other unaltered image of a depicted subject is available during detection. The used retouching detection methods make use of texture descriptors, i.e. Binarized Statistical Image Features (BSIF) [25], and deep face representations extracted by the ArcFace algorithm [26]. Detection performance is evaluated before and after the application of image compression in cross-database experiments. More precisely, we focus on the realistic scenario in which the image source and the potentially applied retouching algorithm are unknown during the training stage. Obtained results show that retouching detection methods based on texture descriptors are highly impacted by alterations induced by image compression. In contrast, the use of deep face representations provides high robustness towards image compression and achieves competitive detection performance in a differential scenario.
This article is organized as follows: The used databases containing retouched face images are summarized in Sect. II. Subsequently, the analyzed single image and differential retouching detection methods are described in Sect. III. The experimental setup and results are presented in Sect. IV. Finally, a conclusion is given in Sect. V.

II. RETOUCHED FACE DATABASES
Two subsets of publicly available face image databases, i.e. FERET [27] and FRGCv2 [16], were employed. The selection of reference and probe images is summarized in the following subsection (Sect. II-A). Subsequently, the generation of retouched face images (Sect. II-B) and the application of image compression (Sect. II-C) are described.

A. Reference and probe images
As reference face images, good-quality frontal faces with mostly neutral expression have been manually selected. In addition, compliance with the specifications of ICAO has been assured. Particularly, an inter-eye distance of at least 90 pixels in the facial image has to be fulfilled [28]. Generally speaking, it is rather unlikely that low-quality face images (often referred to as faces in the wild) are manipulated using retouching algorithms, since strongly unconstrained face images are usually captured in non-cooperative environments, e.g. surveillance scenarios. Moreover, this case study particularly focuses on the manipulation of face images which could subsequently be used in the issuance process of electronic travel documents. In addition, probe images were chosen which exhibit more variations, e.g. in pose, expression, focus and illumination. If feasible, probe images were selected from different capture sessions to achieve a realistic scenario. Examples of probe and reference images of the two resulting subsets are shown in Fig. 2. The number of subjects, corresponding reference and probe images, as well as the resulting number of single image-based and differential detections are listed in Table II. At the pre-processing stage, facial images are normalized by applying adequate scaling, rotation and padding/cropping to achieve an alignment w.r.t. the eyes' positions. More specifically, landmarks are detected using the dlib method [29] and alignment is performed w.r.t. the detected eye coordinates with a fixed position and an inter-eye distance of 90 pixels, which results in an image resolution of 360×480 pixels.
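The eye-based alignment described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper fixes only the inter-eye distance (90 px) and the output resolution (360×480), and landmark detection with dlib is omitted here.

```python
import numpy as np

# Canonical geometry from the paper: eyes aligned to fixed positions with
# an inter-eye distance of 90 px, yielding 360x480 aligned face images.
TARGET_EYE_DIST = 90.0

def alignment_params(left_eye, right_eye):
    """Scale factor and in-plane rotation (degrees) that map the detected
    eye pair onto the canonical inter-eye distance and a horizontal eye
    line. Eye coordinates would come from a landmark detector such as
    dlib; the detection step itself is omitted in this sketch.
    """
    delta = np.asarray(right_eye, float) - np.asarray(left_eye, float)
    scale = TARGET_EYE_DIST / np.hypot(*delta)            # resize factor
    angle = np.degrees(np.arctan2(delta[1], delta[0]))    # rotation to level eyes
    return scale, angle
```

The returned parameters would then drive a similarity transform (plus padding/cropping to 360×480) with any image library.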

B. Automatic Retouching
To apply retouching to the reference images, various apps which are freely available in the Google Play Store [36] were chosen. Note that emphasis was put on free apps since these are more likely to be employed by users, in contrast to costly desktop applications which have been used in related works, e.g. [18], [19]. Further, the users' ratings of selected apps and the number of downloads were considered as selection criteria. Moreover, it was verified that the apps yield results of sufficient quality, i.e. apps which produce dollish-looking face images were not considered. Finally, easy-to-use apps were favored since these facilitate an automatic generation of retouched images and allow for an (all-in-one) automatic beautification.
Based on aforementioned criteria the following six apps were selected for the creation of the database. Table III lists said apps and their properties. Examples of applications of each selected app to a male and a female face image are shown in Fig. 3.
The automated creation of retouched face images was implemented on a Samsung Galaxy S6 device with Android version 7.0 and an Apple MacBook Pro. To this end, the Automate app [37] was employed to apply FotoRus and InstaBeauty to all reference face images of both subsets of the databases. For all remaining apps, a desktop click recording software named Cliclick [38] was applied together with the Android app ApowerMirror [39]. The latter app allows the mirroring of a smartphone device to a desktop device. This automated process resulted in a total number of (529+984)×6 = 9,078 retouched face images. The described database of retouched face images was first introduced in [20].

C. Image Compression
Image compression represents a well-studied field in face recognition [8], [40], [41]. Compression algorithms as well as compression ratios used in this work are based on the recommendations of ICAO [28]. Studies undertaken using standard photograph images but with different vendor algorithms and JPEG [23] and/or JPEG 2000 [24] compression showed the minimum practical image size for an ICAO-standardized face image to be approximately 12 kB of data [8]. Higher compression beyond this size is expected to result in significantly less reliable facial recognition results. Twelve kilobytes cannot always be achieved as some images compress more than others at the same compression ratio, depending on factors such as clothes, coloring and hair style. In practice, average compressed facial image sizes in the 15 kB to 20 kB range should be the optimum for use in electronic travel documents [28]. The JPEG 2000 compression standard generally outperforms JPEG in terms of Peak Signal-to-Noise Ratio (PSNR) rate-distortion behavior. Hence, in practical applications, JPEG is applied at lower compression levels.
In order to resemble realistic applications of image compression, we applied JPEG 2000 at compression levels chosen to achieve an average file size of 15 kB. Due to the aforementioned reasons, JPEG compression is applied at slightly lower compression levels, leading to an average file size of 20 kB. Example images of compressed bona fide and retouched face images are shown in Fig. 3. Based on human perception, no clearly visible artefacts are caused by the applied compression. Therefore, no impact on face recognition performance is to be expected for the applied compression levels. Fig. 4 shows closeups of a high-frequency part of the bona fide image of Fig. 3, where slight blocking and blurring artefacts become visible for JPEG and JPEG 2000 compression, respectively. Obviously, higher compression levels are expected to cause stronger artefacts. However, from a practical point of view, the used compression levels are more relevant due to the aforementioned reasons.
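A common way to hit such average target file sizes in practice is to search over the encoder's quality setting. The sketch below is a generic illustration, not the paper's pipeline: `encode` is a hypothetical caller-supplied function (e.g. wrapping Pillow's JPEG writer) returning the compressed size in bytes, and the only assumption is that size is non-decreasing in quality, which holds for JPEG in practice.

```python
def quality_for_target_size(encode, target_bytes, q_min=1, q_max=100):
    """Highest quality setting whose encoded size stays at/below target.

    `encode(quality)` returns the compressed byte size for that quality;
    a binary search exploits the (assumed) monotonicity of size in quality.
    """
    best = q_min
    lo, hi = q_min, q_max
    while lo <= hi:
        mid = (lo + hi) // 2
        if encode(mid) <= target_bytes:
            best = mid        # feasible: try a higher quality
            lo = mid + 1
        else:
            hi = mid - 1      # file too large: reduce quality
    return best
```

For the paper's setting, `target_bytes` would be roughly 15 kB for JPEG 2000 and 20 kB for JPEG, applied per database so that the average file size matches the target.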

III. RETOUCHING DETECTION
For the task of facial retouching detection, we employ novel methods in both a single image-based and a differential scenario, see Fig. 5. In both scenarios, face representations are extracted employing different feature extractors (Sect. III-A). Subsequently, machine learning-based classifiers are trained to distinguish between bona fide (unaltered) and retouched face images (Sect. III-B).

A. Feature Extractors
The following two types of features are extracted from a pair of reference and probe face images: 1) Texture descriptors (TD): the pre-processed face images are cropped to 160×160 pixels centered around the tip of the nose. Subsequently, facial crops are converted to grayscale.
In the feature extraction stage, the pre-processed face image is first divided into 4×4 cells in order to retain local information. BSIF [25] represents a popular generic texture descriptor employing filters learned from natural images. BSIF has been found to be a powerful feature for texture classification. Especially in the research field of biometric recognition, BSIF has gained attention as it has been successfully applied to perform various biometric tasks based on diverse biometric characteristics. 2) Deep face representations (DFR): the ArcFace algorithm [26], [42] is used to extract deep face representations from the reference and probe image. This algorithm is based on the ResNet-50 convolutional neural network architecture and employs an Additive Angular Margin Loss (ArcFace) to obtain highly discriminative features for face recognition. It was shown to achieve state-of-the-art recognition performance on various challenging datasets. As feature extractor, the publicly available pre-trained deep network is applied, i.e. the deep representations extracted by the neural network at its lowest layer are used. Since this algorithm applies an internal pre-processing, no cropping (or grayscale conversion) is employed before the feature extraction process. The resulting feature vectors extracted from the reference and probe face image consist of 512 floats.
To learn rich and compact representations of faces, deep face recognition systems leverage huge databases of face images. Alterations resulting from facial retouching will also be reflected in extracted deep face representations. It is expected that such changes are more pronounced in case anatomical alterations are induced through retouching, since deep face recognition systems exhibit high generalization capabilities w.r.t. textural changes of skin.
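To make the texture-descriptor pipeline concrete, the following sketch computes cell-wise code histograms in the spirit of BSIF. Note that real BSIF uses filters learned from natural images via ICA; the random filters used in the usage example are merely stand-ins, and the function is an illustration rather than the authors' implementation.

```python
import numpy as np

def bsif_like_histograms(img, filters, cells=(4, 4)):
    """Cell-wise code histograms in the spirit of BSIF.

    `filters` is an (n, k, k) stack of filters; real BSIF employs filters
    learned from natural images (ICA), so random filters are only a
    stand-in here. Each pixel receives an n-bit code from the signs of
    its n filter responses; one histogram per cell of a 4x4 grid (as in
    the paper) is computed and all histograms are concatenated.
    """
    n, k, _ = filters.shape
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    codes = np.zeros((h, w), dtype=int)
    for i, f in enumerate(filters):
        # filter response via correlation over all pixel neighbourhoods
        resp = sum(f[a, b] * padded[a:a + h, b:b + w]
                   for a in range(k) for b in range(k))
        codes |= (resp > 0).astype(int) << i   # binarize, pack as bit i
    ch, cw = h // cells[0], w // cells[1]
    hists = [np.bincount(codes[r*ch:(r+1)*ch, c*cw:(c+1)*cw].ravel(),
                         minlength=2 ** n)
             for r in range(cells[0]) for c in range(cells[1])]
    return np.concatenate(hists)
```

With 8 filters and a 4×4 grid this yields 16 histograms of 256 bins each, i.e. a 4,096-dimensional texture feature vector per face crop.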
In a single image-based detection system, the detector processes only the reference image, e.g. in an off-line authenticity check of an electronic travel document (this scenario is also referred to as a no-reference scenario). For this detection approach, the extracted feature vectors are directly analyzed.
In contrast, in a differential detection system, a trusted live capture from an authentication attempt serves as an additional source of information for the detector, e.g. during authentication at an automated border control gate. This information is utilized by estimating the differences between feature vectors extracted from processed pairs of images. Specifically, an element-wise subtraction of feature vectors is performed. It is expected that differences in certain elements of the difference vectors indicate retouching. Note that all information extracted by single image-based detectors might as well be leveraged within this scenario.
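The two detector inputs can be summarized in a few lines; `detector_input` is a hypothetical helper name for illustration, not from the paper.

```python
import numpy as np

def detector_input(ref_features, probe_features=None):
    """Feature vector handed to the classifier in both scenarios.

    Single image scenario: the reference features are used directly
    (no-reference case). Differential scenario: the element-wise
    difference to the trusted probe capture is used instead, so
    retouching shows up as deviations in individual elements.
    """
    ref = np.asarray(ref_features, dtype=float)
    if probe_features is None:
        return ref                              # single image-based case
    return ref - np.asarray(probe_features, dtype=float)
```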

B. Classification
SVMs with a Radial Basis Function (RBF) kernel are used to distinguish between bona fide and retouched face images. In order to train the SVMs, the scikit-learn library [43] is applied. Since the elements of extracted feature vectors are expected to have different ranges, data normalization is employed. Data normalization turned out to be of high importance in cross-database experiments. It rescales the feature elements to exhibit a mean of 0 and a standard deviation of 1. During training, a regularization parameter of C = 1 and a kernel coefficient gamma of 1/n is used, where n represents the number of feature elements. SVMs return a detection score in [0, 1].
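With scikit-learn, the described classifier can be set up as below. `gamma='auto'` is scikit-learn's spelling of 1/n (one over the number of feature elements), and `probability=True` yields the detection score in [0, 1]; the helper name itself is ours.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def make_retouching_classifier():
    """Classifier setup as described above: z-score normalization
    (mean 0, standard deviation 1 per feature element) followed by an
    RBF-kernel SVM with C = 1 and gamma = 1/n_features ('auto').
    """
    return make_pipeline(
        StandardScaler(),
        SVC(kernel="rbf", C=1.0, gamma="auto", probability=True),
    )
```

Usage: fit on feature vectors with labels 0 (bona fide) and 1 (retouched); `predict_proba(X)[:, 1]` then gives the detection score in [0, 1].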

IV. EXPERIMENTAL RESULTS
The following subsections summarize the used evaluation methodology (Sect. IV-A), report obtained detection results (Sect. IV-B), and provide a discussion including key observations (Sect. IV-C).

A. Setup and Evaluation Metrics
During training, we employ all but one retouching app; the left-out app is subsequently used in the testing stage, i.e. a potentially applied retouching algorithm is unknown at testing. In other words, a leave-one-out strategy is applied for retouching algorithms. Such a scenario better reflects a real-world case in which it cannot be assumed that the potentially applied retouching algorithm is known beforehand. In this setting, retouched images are alternately chosen from the sets of retouched face images which are not used during testing. For example, in case testing is done for AirBrush, this algorithm will not be used during training, in which the first retouched image is selected from the BeautyPlus set, the second from the Bestie set, and so on. Moreover, training is conducted using only original (uncompressed) images, while in the testing stage compressed images are used, too.
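The round-robin training selection above can be sketched as follows; the helper and the app ordering are illustrative assumptions, not taken from the paper's code.

```python
def leave_one_app_out(apps, n_train_samples):
    """Round-robin training selection when one retouching app is held out.

    For each held-out app, the i-th retouched training image is drawn
    from the remaining apps in turn, so the tested retouching algorithm
    never appears in the training material.
    """
    splits = {}
    for held_out in apps:
        remaining = [a for a in apps if a != held_out]
        # alternate through the remaining apps for every training sample
        splits[held_out] = [remaining[i % len(remaining)]
                            for i in range(n_train_samples)]
    return splits
```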
Metrics defined for presentation attack detection in ISO/IEC 30107-3 [44] are applied to report the performance of the detection algorithms: the Attack Presentation Classification Error Rate (APCER) is defined as the proportion of attack presentations using the same presentation attack instrument species incorrectly classified as bona fide presentations in a specific scenario. The Bona Fide Presentation Classification Error Rate (BPCER) is defined as the proportion of bona fide presentations incorrectly classified as presentation attacks in a specific scenario. The D-EER, i.e. the operating point where APCER = BPCER, is reported for the different detection methods.
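For completeness, a minimal D-EER computation under the convention that higher scores indicate retouching; this is a straightforward sketch, not the evaluation code used in the paper.

```python
import numpy as np

def detection_eer(bona_fide_scores, attack_scores):
    """D-EER: operating point where APCER equals BPCER.

    APCER(t) is the fraction of attack scores below threshold t
    (attacks accepted as bona fide); BPCER(t) is the fraction of bona
    fide scores at/above t (bona fide flagged as attacks). The D-EER is
    taken at the threshold where the two error rates are closest.
    """
    bona_fide = np.asarray(bona_fide_scores, float)
    attacks = np.asarray(attack_scores, float)
    thresholds = np.unique(np.concatenate([bona_fide, attacks]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        apcer = np.mean(attacks < t)
        bpcer = np.mean(bona_fide >= t)
        if abs(apcer - bpcer) < best_gap:
            best_gap = abs(apcer - bpcer)
            eer = (apcer + bpcer) / 2
    return eer
```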

B. Performance Estimation
Obtained detection performance rates for the single image-based detection methods are summarized in Table IV. Corresponding DET curves are depicted in Fig. 6 to Fig. 9. It can be observed that the detection performance on original images varies strongly for both feature extraction methods across retouching apps, i.e. D-EERs between 1% and 40% are obtained, see Fig. 6 (a) to Fig. 9 (a). For the BSIF-based detector, competitive results are achieved for detecting images which have been retouched applying Bestie or AirBrush, see Fig. 6 (a) and Fig. 7 (a); these apps perform severe textural alterations on the entire face region, i.e. skin smoothing. Further, high detection performance is achieved for BeautyPlus and FotoRus. Similarly, the ArcFace-based detector yields high accuracy for retouching algorithms such as FotoRus or InstaBeauty, see Fig. 8 (a) and Fig. 9 (a). Face images retouched by said algorithms exhibit anatomical changes, e.g. a thinner nose. Higher error rates can be observed for retouching methods which yield only minor alterations, e.g. YouCam Perfect. On average, moderate performance is obtained in the single image-based scenario with average D-EERs ranging from approximately 17% (TD) to 20% (DFR) across databases and retouching algorithms, see Table IV.
If image compression is applied, the detection performance of the BSIF-based detector is significantly impacted. Focusing on JPEG compression, detection performance is positively affected for some retouching methods, i.e. D-EERs decrease. As mentioned earlier, retouching methods usually apply texture smoothing to hide skin impurities. Focusing on image compression, the resulting homogeneous texture parts allow for a more efficient compression. That is, it is to be expected that retouched facial images exhibit higher visual quality compared to compressed bona fide images, which likely comprise small JPEG artefacts at high-frequency texture parts. Such artefacts will more likely be represented in BSIF histograms extracted from bona fide images but not in those of retouched images. Consequently, BSIF histograms extracted from bona fide and retouched face images are better distinguishable and the overall detection performance of the BSIF-based detector improves. However, this might also depend on the image source, i.e. the face database, as can be seen for JPEG 2000 compression, see Fig. 6 (c) and Fig. 7 (c). Here, detection performance rates significantly decrease on the FRGCv2 database while they increase on the FERET database. If image compression is applied at higher levels (as is the case for JPEG 2000 in our experiments), traces of retouching might vanish. Such effects can be database-specific and clearly hamper a reliable detection of facial retouching. Similar effects are expected for the use of other texture descriptors, e.g. Local Binary Patterns (LBP) [45].
In contrast, the single image-based retouching detector based on ArcFace features turns out to be highly robust to image compression. Almost identical D-EERs are obtained in the presence of JPEG and JPEG 2000 compression compared to original images, see Table IV. In addition, the characteristics of corresponding DET curves are very similar, see Fig. 8 or Fig. 9. This is because the ArcFace feature extractor has been trained to extract deep face representations which are robust with respect to various variations, including image compression [26]. For the task of retouching detection, this is a clear advantage over methods based on texture descriptors. Focusing on the differential, i.e. image pair-based, detection scenario, obtained results are listed in Table V. Corresponding DET curves are plotted in Fig. 10 to Fig. 13. For original images, on average slightly inferior detection performance is obtained for the BSIF-based retouching detection method. In contrast, for the ArcFace-based detector, significantly improved detection performance is achieved in the differential scenario. This is especially the case if deep face representations of reference images considerably deviate from those extracted from the probe image, which clearly applies to retouching algorithms causing drastic textural or anatomical changes to face images, e.g. AirBrush or FotoRus. With respect to image compression, similar effects are observable for the BSIF-based detector in the differential scenario, see Fig. 10 and Fig. 11. JPEG compression at the considered compression level generally improves the detection performance due to the above-mentioned reasons, see Fig. 10 (b) and Fig. 11 (b). Like in the single image-based scenario, such effects seem to depend on the image source and the level of compression. Again, for the application of JPEG 2000, different effects with respect to detection performance can be observed, see Fig. 10 (c) and Fig. 11 (c).
Similar to the single image-based scenario, the differential ArcFace-based retouching detection system achieves high robustness against image compression, see Table V. This is also reflected by almost identical DET curve characteristics when detecting original and compressed retouched face images in Fig. 12 and Fig. 13, respectively. Again, this results from the fact that the ArcFace-based feature extractor is robust to alterations resulting from image compression, since the underlying model has been trained for the task of face recognition. In contrast, the BSIF-based detector, which performs a pixel-wise analysis of face images, is obviously sensitive to alterations induced by image compression.

C. Discussion
In summary, the following observations are made based on the conducted experiments:
• In general, the face image manipulation technique considered in this work, i.e. facial retouching, is detected more reliably if it causes drastic textural or anatomical changes. On the contrary, retouching detection becomes more challenging in case only small alterations are performed by a retouching app. For example, the YouCam Perfect app only slightly edits face images, which leads to higher detection errors for all individual detection systems. However, such "minor" image edits turn out to be less relevant since they are excluded from discussed photoshop legislations [46].
• In the challenging cross-database scenario where the potentially used retouching method is unknown during training, only moderate detection performance is achieved (average D-EERs of approximately 17-20%) in the single image-based detection scenario. Note that this more realistic evaluation scenario is hardly considered in the scientific literature [5]. Significantly improved detection performance can be obtained in a differential detection scenario where a trusted but unconstrained probe image serves as an additional input to the detector. Specifically, for the use of deep face representations, D-EERs can be reduced down to approximately 12.5%.
• Image compression has considerable impact on the detection performance of detectors using texture descriptors at feature extraction. The proposed BSIF-based retouching detection method appears to be sensitive to pixel variations caused by image compression. While for the considered compression algorithms and compression levels this can also lead to detection performance improvements, it is not advisable to employ such types of feature extractors for retouching detection since obtained detection scores can be misleading in the presence of image compression.
Besides image compression, further types of image post-processing might be applied which are expected to cause similar effects. As opposed to texture descriptors, deep face representations turn out to be more suitable for retouching detection. They achieve very high robustness against the considered image compressions in both detection scenarios. Many previous studies which make use of deep learning for the task of facial retouching detection have reported performance degradation if severe image compression is applied, e.g. [9]. This leads to the assumption that the used training data might not reflect variations caused by image compression. Therefore, the proposed approach of employing deep face representations, which have already been trained to be robust against such alterations, turns out to be more promising. Furthermore, it is reasonable to assume that retouching detection methods based on deep face representations are also robust to other image post-processings, e.g. blurring, changes of image contrast, or print-scan transformations [47].

V. CONCLUSION
In this work, we investigated the effects of image compression on face image manipulation detection in a case study on facial retouching. Automated retouching detection methods employing texture descriptors and deep face representations in a single image as well as in a differential detection scenario have been proposed. For this purpose, retouched face images have been generated based on two publicly available face databases using six different retouching apps. Subsequently, bona fide and retouched face images have been compressed applying JPEG and JPEG 2000 at compression levels which are of practical interest. In the challenging scenario where the potentially used retouching app is unknown, it was shown that the highest detection performance is achieved for differential scenarios employing deep face representations. Additionally, the use of deep face representations turned out to be beneficial as they are highly robust to the considered compression algorithms. Moreover, obtained results revealed that retouching detection methods based on texture descriptors might be severely influenced by image compression. Interestingly, obtained results indicate that image compression can also have a positive effect on the detection performance, which conflicts with findings of some previous studies. Similar effects might be observed for other face manipulations, e.g. face swapping or morphing, and corresponding detection methods. Generally, manipulation detection mechanisms are expected to be sensitive if they are not explicitly trained to be robust to image alterations caused by image compression. Moreover, further possible types of image post-processing, e.g. sharpening or adjustment of the color histogram, are expected to cause similar effects.