CHI Conference Proceedings
Research Article (Open Access)
DOI: 10.1145/3613904.3642382

Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes, and Knowledge in 10 Countries

Published: 11 May 2024

Abstract

Deepfake technologies have become ubiquitous, “democratizing” the ability to manipulate photos and videos. One popular use of deepfake technology is the creation of sexually explicit content, which can then be posted and shared widely on the internet. Drawing on a survey of over 16,000 respondents in 10 different countries, this article examines attitudes and behaviors related to “deepfake pornography” as a specific form of non-consensual synthetic intimate imagery (NSII). Our study found that deepfake pornography behaviors were considered harmful by respondents, despite nascent societal awareness. Regarding the prevalence of deepfake pornography victimization and perpetration, 2.2% of all respondents indicated personal victimization, and 1.8% of all respondents indicated perpetration behaviors. Respondents from countries with specific legislation still reported perpetration and victimization experiences, suggesting NSII laws are inadequate to deter perpetration. Approaches to prevent and reduce harms may include digital literacy education, as well as enforced platform policies, practices, and tools which better detect, prevent, and respond to NSII content.


1 INTRODUCTION

Non-consensual synthetic intimate imagery (NSII) refers to digitally altered content that is fake but depicts the faces, bodies, and/or voices of real people. NSII can be created through more traditional means using photo-editing software to stitch together segments, add filters, or change the speed of videos — often referred to as “shallowfakes” or “cheapfakes.” Increasingly, however, NSII is being created through the use of artificial intelligence (AI), involving different methods, such as speech-to-speech voice conversion, lip-syncing, puppet-master, face synthesis, attribute manipulation, and face-swapping [83]. AI-generated NSII is more colloquially known as “deepfake pornography.” The consumer creation of deepfakes (a portmanteau of “deep learning” and “fake” [72, 81]) started in late 2017 on Reddit, after a user named “deepfakes” posted NSII depicting the faces of female celebrities “stitched” onto pornographic videos [44, 81]. Continued consumer interest in deepfakes is reflected in the proliferation of dedicated deepfake sites and forums, often depicting celebrity targets. While deepfakes can be used in beneficial ways for accessibility and creativity [19, 26], abuse potential has increased in recent years as the technology has advanced in sophistication and availability [12, 34, 53, 80]. Deepfakes can be weaponized and used for malicious purposes, including financial fraud, disinformation dissemination, cyberbullying, and sexual extortion (“sextortion”) [4, 26].

Non-consensual deepfake pornography can be considered a form of image-based sexual abuse because intimate images are created and/or shared without the consent of the person or persons depicted in the images. The harms of image-based sexual abuse have been well-documented, including negative impacts on victim-survivors’ mental health, career prospects, and willingness to engage with others both online and offline [16, 39]. The proliferation of NSII technologies means that anyone can now become a victim of image-based sexual abuse, and research suggests that women continue to bear the brunt of this online abuse [22, 44]. Investigating the prevalence of the misuse of deepfake technologies, specifically for NSII, can help Human Computer Interaction (HCI) research communities to better understand how best to mitigate the gendered societal harms associated with this phenomenon. Potential avenues for reduced harm may include comprehensive and considered legislation, NSII policies around the creation and/or distribution of content (e.g., Reddit’s continued de-platforming of subreddits dedicated to deepfake pornography or Google’s policy for removal of NSII), technical tools to help victim-survivors discover and automate takedown requests, and user experience treatments on deepfake creation tools designed to deter users from creating and distributing NSII.

Over the past decade, terminology and language in the image-based sexual abuse space have evolved to move beyond problematic, perpetrator-centric terms like “revenge porn” and to better encapsulate the non-consensual nature of such acts [54]. Accordingly, it is important to emphasize that deepfake pornography is, by its nature, both involuntary and non-consensual. Therefore, throughout the article, we use the broader term “non-consensual synthetic intimate imagery” (NSII) to refer to fake, digitally altered images created using AI and non-AI tools, as well as the more specific term “AI-generated image-based sexual abuse” (AI-IBSA) to refer to fake, digitally altered images created using AI. However, when reporting on our survey questions and findings, we use the more recognized “deepfake pornography” term (as well as “deepfakes”) to avoid confusion for the reader. We focus on face-swapping, which involves replacing the face of the source person with that of the target person, so that the target person appears to engage in scenarios in which they never appeared [53].

To begin mapping the prevalence of and general sentiment around AI-IBSA, we addressed the following research questions:

RQ1: What is the general public’s awareness of, and attitudes towards, AI-IBSA?

RQ2: What is the prevalence of AI-IBSA behaviors (e.g., creating, viewing, and/or sharing images)?

RQ3: How does gender influence both RQ1 and RQ2?

In this study, we surveyed over 16,000 respondents in 10 different countries. Countries were selected based on a number of considerations, including representation of diverse legislative approaches to image-based sexual abuse, replication/expansion of previous findings (e.g., in Australia and the United States) around the prevalence of image-based sexual abuse, and geographic diversity. Questions were designed to assess attitudes towards, and experiences of, image-based sexual abuse, with specific questions relating to NSII and more specifically AI-IBSA. The study was primarily quantitative, although optional open-ended questions allowed respondents to add additional context and thoughts. The research contributes to the existing literature by identifying trends in geographical awareness and behaviors; building on the existing image-based sexual abuse literature in terms of understanding who is most at risk of this phenomenon; and probing the efficacy of current or proposed remedies.


2 RELATED WORK

In this section, we provide a brief summary of the literature surrounding public perceptions of deepfakes more broadly and in relation to associated harms, deepfake detection, and legislative redress. The structure of this background review loosely follows that of our research questions; however, as prevalence data is a current gap in the literature, in our review below we also focus on whether existing or proposed laws sufficiently address the harms associated with AI-IBSA.

2.1 Attitudes Towards Deepfakes

AI-IBSA is both similar to and distinct from traditional forms of image-based sexual abuse. In introducing the “pervert’s dilemma,” Ohman [60] notes that the mere creation of deepfake pornography in isolation is somewhat comparable to private sexual fantasies, which are not considered morally objectionable. However, when situated appropriately against the backdrop of gender inequality and gender-based violence, the moral unacceptability of AI-IBSA becomes clear. Moreover, deepfake technology provides a unique avenue for harassment and harm as it significantly reduces the barriers to the creation of non-consensual content [3]. As Harris [38] notes, while some have argued that concerns about the epistemic effects of deepfakes are overblown (e.g., because content is unlikely to be taken at face value and consumers will be able to critically assess veracity claims), artificially generated associations can have long-term harmful effects (see also [67]).

Researchers have begun to explore the different perspectives on deepfakes through attitudinal research. Although this body of work is still somewhat nascent, results suggest significant concern about the discovery and propagation of deepfake videos [17, 36]. Additionally, studies have focused on the harms associated with deepfakes, including a study published in 2021 in which respondents deemed creating and posting deepfake pornography, even when labeled as fake, to be highly harmful and deserving of punishment [44]. One study [29] found that while female respondents indicated greater levels of victim harm, men were more likely to create deepfake pornography. The study also found that harms are greater when content is shared, as opposed to when it is used for personal sexual gratification. To the best of our knowledge, a study with Indian respondents is the sole general population survey outside of the United States. Its authors found that awareness of deepfakes among Indian respondents was low, and that even those who knew about deepfakes expected (incorrectly) to be able to easily discern fake videos [72].

A subset of research has also been conducted with respondents who are knowledgeable about deepfakes. These studies have focused primarily on ethical or privacy concerns. One study used content analysis to examine Reddit users actively discussing deepfakes and found that users were generally supportive of deepfake technology, regardless of the consequences [32]. In contrast, qualitative research involving individuals using an open-source deepfake creation tool found that participants had reflexively engaged with the concepts of consent and privacy, and expressed significant concerns about the misuse of deepfakes [82]. These themes were echoed and expanded upon in a mixed-methods study carried out in China [47], where users familiar with deepfakes highlighted informed consent, privacy protection, and non-deception as relevant factors influencing the social acceptance of deepfakes. That is, if deepfakes are transparent and created or distributed with the knowledge and consent of those featured, they are more acceptable than deepfakes created non-consensually and/or for deceptive purposes. In addition to these factors, attitudes are also informed by the content or aim of the deepfakes (e.g., for creative as opposed to malicious purposes). For example, two separate studies found that entertainment [47] or humorous/amusement value [17] mitigated ethical judgments of deepfakes. This may suggest that content such as sexual or parody videos could be considered more acceptable than videos intended to further political disinformation, for example.

In sum, there is some quantitative, survey-based research suggesting that average respondents have concerns about the deleterious potential of deepfakes. Similarly, existing qualitative research with participants who have expertise on deepfakes shows that they share concerns about risks and harms. More research, however, is needed to investigate geographic diversity in general attitudinal quantitative research, as well as prevalence data, to broadly understand deepfake behavior and its impacts. More work is also needed specifically on attitudes towards creating, sharing, or explicitly seeking out such content.

2.2 Deepfake Creation and Detection

While researchers and journalists have been documenting the phenomenon of deepfakes for several years, interest has accelerated in recent years due to technological advancements that facilitate the creation of deepfakes in affordable and low-friction ways [9]. For example, “deepfake pornography,” as a keyword search within Google Scholar, returns 284 results from 2017-2019 as compared to 1,480 results from 2022-2023. Much of the research in this space [33] focuses on documenting how this technology works or highlighting the performance issues for various detection tools or methods [21, 53, 64, 66, 74]. Echoing broader AI research on the issues associated with biased training datasets, deepfake detection research has identified diversity and bias problems within relevant datasets [78, 84]. For example, detection techniques have been shown to be sensitive to gender, performing worse on deepfakes depicting women [57] as compared to those depicting men. A recent systematic review of deepfake detection papers succinctly concludes that current models are sensitive to novel or challenging conditions [74]. This is particularly relevant, as non-digitally altered image-based sexual abuse material may be captured surreptitiously, or made with poor cameras and/or bad lighting conditions. AI-IBSA is thus likely to demonstrate the exact risk factors for poor detection rates.

In addition to technical tool failures, studies have documented the fallibility of humans in correctly identifying deepfakes as fake [42, 43, 77]. This is seen across a variety of deepfake types, from deepfake audio [51] to photos [11, 58] and videos [37]. It is possible that as deepfakes become more ubiquitous, humans will improve in identifying markers or other signals suggesting the content is not “real.” However, from a technological detection perspective, it seems likely that an arms race of sorts will develop, such that increasingly sophisticated deepfakes will require significant improvements in detection methods to catch up and keep pace.

2.3 Deepfakes and the Law

The societal implications of deepfake technology are far-reaching and, while many have been recognized or anticipated, experts generally agree current legislative remedies for misuse are unlikely to be effective [15, 20, 34, 52, 73]. Although many scholars call for legislation, there are others who are skeptical of criminalizing NSII, AI-IBSA, or deepfakes more broadly [27, 41, 82]. Insufficient NSII laws are the norm, rather than the exception, given the history of failed legislative proposals for image-based sexual abuse in general [3, 13, 79]. In the legal sphere, the first American federal law regarding deepfakes was signed in 2019. This law focuses on the potential of deepfakes to be weaponized by foreign powers to influence elections, and it also establishes a competition to encourage research into detection technologies [28]. Four states (California, Virginia, New York, and Georgia) have also passed laws criminalizing “deepfake pornography” specifically [44, 52, 69]. At the end of 2022, a federal bill was introduced in the U.S. House of Representatives specifically prohibiting deepfake pornography [55]. Although legislative remedies have approached political deepfakes and AI-IBSA separately, one critique from scholars is that both operate in similar ways to chill critical speech [49].

Outside of the United States, several countries have either explicitly included NSII in existing laws or are considering making legislative amendments. For example, in Australia, some states and territories make clear that the criminal offense of non-consensually distributing intimate images extends to digitally altered material. The Australian Online Safety Act 2021 [59] also provides a civil remedy scheme for a range of online harms, including the non-consensual sharing of digitally altered images. South Korea specifically prohibits NSII, alongside all other forms of pornography [46]. And in England and Wales, the UK Online Safety Act 2023 criminalizes threatening or sharing intimate images without consent, including images that have been digitally altered [24, 45]. In most countries, however, including most of those surveyed in our study (Belgium, Denmark, France, Mexico, Netherlands, Poland, Spain), there are no protections at all.


3 METHODS

3.1 Ethical Considerations

In mid-2023, we conducted a multi-country online survey on image-based sexual abuse, defined as the non-consensual taking, creating, or sharing of intimate images (photos or videos), including threats to share intimate images. The study procedure and survey were approved by the Human Research Ethics Committee at the Royal Melbourne Institute of Technology. Respondents were informed of the topic at the beginning of the survey and were allowed to opt out at any time. All questions involving self-reporting of victimization and/or perpetration were voluntary and respondents could choose an answer of “prefer not to say” while still completing the survey. Respondents were given support service information tailored to their geographic location at the end of the survey. Respondents were compensated based on pilot tests of time taken in each country and exchange rates.

3.2 Study Design

We conducted the online survey in 10 countries, employing a research and survey firm (YouGov) to survey a minimum of 1,600 adults (18+) in each country, for a total respondent count of 16,693. YouGov was selected due to their comprehensive country coverage and large panels [70], which reduces the likelihood of needing extensive within-country weighting. YouGov retains demographic information on their survey panelists, and screener questions were used to achieve representative sampling based on quotas. Upon qualifying for the survey, completion rates ranged from 71.0% (Belgium) to 89% (Australia). The final dataset consisted of respondents who passed the YouGov standard quality checks, including bot catchers, speeder detection, and manual checking of open-text questions.

In conducting our survey, we intentionally chose a variety of countries that represent the spectrum of how legislation has dealt with NSII, image-based abuse, and pornography more broadly. The goal was to obtain representative samples by age, gender, and location (e.g., state or territory in Australia, province in South Korea, etc.) within each country, using the latest official population estimates reflecting each surveyed country. The survey built on an earlier survey conducted in Australia [62], which focused on non-consensual intimate imagery more broadly (but did not specifically ask about deepfakes). The revised survey (median completion time: 18.2 minutes) added some additional scales, teased apart how content was obtained (e.g., filmed/photographed versus stolen from a device/cloud), and included questions on NSII and “deepfake pornography” specifically.

In this paper, we focus on the subset of 23 questions in our survey relating to AI-generated image-based sexual abuse (AI-IBSA). To administer the survey in the official language(s) of each country, the survey vendor provided translations into Danish, Dutch, French, Polish, Spanish, and Korean. These were then double-checked and edited as needed by native-speaker colleagues of the research team for quality assurance. We aimed to keep the wording as similar as possible across countries, while also accounting for cultural differences in comfort speaking about sexual topics.

The survey was deployed from May to June 2023. In some countries, samples were not fully representative. Accordingly, when presenting disaggregated findings in the quantitative section, the data are weighted by age, gender, and location. When we aggregate across countries, the data are additionally weighted by population, rounded to the nearest 1,000. Statistics presented in the qualitative section are unweighted.
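
The weighting procedure is described only at the level of the variables used (age, gender, location, and, for cross-country aggregates, population size). As an illustrative sketch of how such weights are commonly constructed, the R code below rakes a single country's sample to census margins using the survey package; the data frame country_sample, the margin tables, and all column names are hypothetical, not the authors' actual code.

```r
# Illustrative raking of one country's sample to census margins (hypothetical data).
library(survey)

# Population margins in the form the survey package expects: one data frame per
# variable, listing its levels and a Freq column of population counts.
age_margins    <- data.frame(age_group = c("18-34", "35-54", "55+"),
                             Freq = c(4.1e6, 5.0e6, 4.6e6))
gender_margins <- data.frame(gender = c("Woman", "Man"),
                             Freq = c(6.9e6, 6.8e6))

unweighted <- svydesign(ids = ~1, weights = ~1, data = country_sample)
raked <- rake(unweighted,
              sample.margins     = list(~age_group, ~gender),
              population.margins = list(age_margins, gender_margins))

summary(weights(raked))  # inspect the resulting post-stratification weights
```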

The mean respondent age was 46.0 years (sd = 16.7). Women accounted for 50.9% of the respondents, men for 47.6%, and together “other,” “prefer not to say,” and “non-binary” accounted for the remaining 1.4%. Detailed demographic breakdowns by country are available in Table A1 in the Appendix. Our gender results and statistical tests focus primarily on respondents who identified as women or men, excluding the 1.0% of other respondents for two reasons: we are using census weights, which are often limited to binary genders, and our total sample (unweighted) of non-binary respondents was 93, which limits our ability to perform statistical tests for significance.

3.3 Measures and Analysis

All quantitative analyses were performed using R Statistical Software v4.3.2 [65]. We primarily used the survey [48], srvyr [31], marginaleffects [6], and epitools [5] packages for statistical analyses. When calculating within-subject mean differences, we use two-tailed one-sample weighted t-tests with bootstrapped standard errors, using 1,000 replicates (e.g., differences in criminalization attitudes across different types of behaviors). When calculating confidence intervals for proportions, we use the Rao-Scott scaled chi-squared distribution for the loglikelihood from a binomial distribution (e.g., % of respondents who had experienced victimization). When calculating gender differences for binary categorical values, Pearson’s Chi-squared tests with Rao-Scott adjustments are used to determine significance, followed by the svyglm function with a quasibinomial family to calculate risk ratios (e.g., were men more likely to report having watched deepfake pornography as compared to women, and how much more likely?). When calculating between-gender mean differences, we use two-tailed two-sample weighted Welch t-tests using bootstrapped standard errors, and 1,000 replicates (e.g., difference in how much women think sharing deepfake pornography of celebrities should be criminalized as compared to men).
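
To make the analysis pipeline concrete, the sketch below maps the tests described above onto calls in the R survey package. All object and column names (respondents, wt, crim_real_share, crim_deepfake_share, victimized, viewed_celebrity) are placeholders for illustration, and the log-link quasibinomial model is one way of obtaining risk ratios directly; this is not the authors' code.

```r
library(survey)

# Weighted design: one row per respondent, wt holding the age/gender/location weights
dsgn <- svydesign(ids = ~1, weights = ~wt, data = respondents)

# Replicate-weight version for bootstrapped standard errors (1,000 replicates)
dsgn_boot <- as.svrepdesign(dsgn, type = "bootstrap", replicates = 1000)

# Within-subject mean difference, e.g. criminalization ratings for sharing real
# content vs. sharing deepfake content: two-tailed one-sample t-test against zero
svyttest(I(crim_real_share - crim_deepfake_share) ~ 0, dsgn_boot)

# Proportion with a CI based on the Rao-Scott scaled chi-squared ("likelihood")
# method, e.g. the share of respondents reporting victimization
svyciprop(~I(victimized == "Yes"), dsgn, method = "likelihood")

# Gender difference for a binary outcome (coded 0/1): Rao-Scott adjusted Pearson
# chi-squared, then a quasibinomial GLM with a log link so exp(coef) is a risk ratio
svychisq(~viewed_celebrity + gender, dsgn, statistic = "F")
rr_fit <- svyglm(viewed_celebrity ~ gender, design = dsgn,
                 family = quasibinomial(link = "log"))
exp(cbind(RR = coef(rr_fit), confint(rr_fit)))
```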

When evaluating differences amongst categorical variables in the unweighted qualitative data, we use Pearson’s Chi-squared test with Yates’ continuity correction (e.g., was there a significant difference in the proportion of men versus women indicating condemnation attitudes). When examining victim-blaming attitudes by country, we use Fisher’s exact test to accommodate the small sample size.
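
The unweighted tests on the qualitative codes are standard base-R calls; a minimal sketch follows, where qual and its columns (gender, condemnation, country, victim_blaming) are hypothetical names for the coded open-ended data.

```r
# Gender-by-theme 2x2 table: Pearson chi-squared with Yates' continuity correction
# (the default correction that chisq.test applies to 2x2 tables)
chisq.test(table(qual$gender, qual$condemnation), correct = TRUE)

# Victim-blaming by country: Fisher's exact test, appropriate for small cell counts
# (for larger tables, simulate.p.value = TRUE is an option)
fisher.test(table(qual$country, qual$victim_blaming))
```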

In relation to AI-IBSA, all respondents were asked three closed-ended questions, as well as one optional open-ended question presented towards the end of the survey. They were first asked whether they were familiar with the concept of deepfake pornography using the following wording: “Realistic-looking fake porn can now be created using artificial intelligence (AI) to swap the faces of pornographic actors with other people’s faces, so that it looks like they’re in a porn video. This is sometimes referred to as ‘deepfake pornography’ as it looks very realistic and can be hard to recognize as fake or digitally created. How familiar are you with this concept?”. This question offered four possible categorical response options, as seen in Table 1. Second, respondents were asked how much they thought a series of behaviors were worthy of criminalization (behaviors can be seen in Figure 1). Response options ranged from -2 (“Definitely should not be a crime”) to 2 (“Definitely should be a crime”), with the midpoint of 0 being “Not sure.” And third, respondents were asked how much they agreed or disagreed with the following statement: “People shouldn’t get upset if someone creates a digitally altered video (e.g., fake porn or a ‘deepfake’) of them without their permission.” Respondents answered this question on a 7-point Likert scale, anchored at 1=“Strongly disagree” and 7=“Strongly agree” (we present the mean alongside the median in the results - see below) [76].

The remaining closed-ended questions were asked of a subset of respondents, depending on their knowledge and experiences. Respondents who indicated at least some familiarity (i.e., “I know a little bit about it” or “I am quite familiar with this”) were asked follow-up questions about AI-IBSA behaviors, as seen in Figure 2. As part of a broader set of questions regarding the victimization and perpetration of image-based sexual abuse since the age of 18, we also asked the subset of respondents who indicated a previous experience with digitally altered images whether the relevant content was a video, and if so, whether the video was a deepfake (allowing respondents to select “Yes”, “No”, “Don’t know”, or “Prefer not to answer”). For victimization questions, the wording was intentionally not blame-casting. Similarly, perpetration questions were phrased neutrally to focus on whether the respondent had ever done an action.

All respondents could opt into answering the open-ended question with the following wording: “Would you like to add any additional thoughts on the topic of deepfakes? Please feel free to skip this question if you don’t have anything to add.” We analyzed the qualitative data for the open-ended response questions using an iterative, inductive approach. Our approach was to translate the responses from the original language to English via the Google Translate API, familiarize ourselves with the data, develop a codebook, code the data, and then review the data to make sense of the themes from the first coding process. To do so, two members of the research team read through 25% of the sample responses across the countries and then co-developed a codebook that detailed descriptions and restrictions on what could be included within a code and provided concrete examples of each code. The same two members of the research team coded the first 150 responses based on four preliminary codes that emerged from the initial review of the data. Coder 1 and coder 2 independently moved through the subset of data, labelling the open-ended responses with relevant codes. Following this, coders 1 and 2 compared coding allocations for quality assurance. The two coders then identified and removed responses that were deemed inappropriate or irrelevant to the research (e.g., “no comment,” or “no thanks, I do not know this topic”). Finally, the team revised and discussed the themes that emerged from the responses (n = 2,474). The final codebook contained four key themes: “condemnation,” “punitive and justice attitudes,” “a fear of technology,” and “victim-blaming.” Inter-rater reliability (IRR) for each code was calculated using Cohen’s Kappa. Across the four codes, Kappa values ranged from .135 to .225, with a median of .173.
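
Inter-rater reliability of this kind is computed per code from the two coders' labels; the snippet below is a minimal sketch, where ratings is a hypothetical two-column data frame holding coder 1's and coder 2's 0/1 labels for a single code, shown both with the irr package and by hand.

```r
library(irr)

# Unweighted Cohen's kappa for two raters; `ratings` has one column per coder
kappa2(ratings)

# The same statistic computed directly (assumes both coders used both labels,
# so the agreement table is square)
cohen_kappa <- function(r1, r2) {
  tab <- table(r1, r2)
  n   <- sum(tab)
  p_obs    <- sum(diag(tab)) / n                      # observed agreement
  p_chance <- sum(rowSums(tab) * colSums(tab)) / n^2  # agreement expected by chance
  (p_obs - p_chance) / (1 - p_chance)
}
cohen_kappa(ratings[[1]], ratings[[2]])
```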


4 RESULTS

Results are organized into four sections. The first section addresses the first research question, focusing on awareness and attitude questions that were asked of all respondents. The second section answers the second question by focusing on behaviors involving AI-IBSA/deepfake pornography, including victimization, perpetration, and consumption. The third section investigates the role of gender in influencing answers to the first and second research questions. Finally, the last section is a qualitative analysis of the open-ended opinion question. We present our results primarily descriptively and holistically, particularly at the country level, as we have no reason to assume differences between countries, given the lack of prior data. When discussing gender differences, statistical analyses (accounting for weights) are presented, alongside measures of uncertainty. We conclude with some qualitative trends from the single, open-ended question to which respondents could choose to respond.

Table 1:
Country | Response | % of Respondents | 95% CI
Australia | I have never heard of this until now | 36.8 | 35.4-38.2
Australia | I have only heard the phrase | 30.3 | 29.0-31.7
Australia | I know a little bit about it | 24.7 | 23.4-26.0
Australia | I am quite familiar with this | 8.2 | 7.3-9.1
Belgium | I have never heard of this until now | 40.1 | 39.1-41.1
Belgium | I have only heard the phrase | 28.0 | 27.1-28.9
Belgium | I know a little bit about it | 25.6 | 24.7-26.5
Belgium | I am quite familiar with this | 6.3 | 5.8-6.8
Denmark | I have never heard of this until now | 34.8 | 34.1-35.5
Denmark | I have only heard the phrase | 44.1 | 43.4-44.8
Denmark | I know a little bit about it | 17.0 | 16.5-17.6
Denmark | I am quite familiar with this | 4.1 | 3.8-4.4
France | I have never heard of this until now | 55.2 | 52.8-57.6
France | I have only heard the phrase | 21.6 | 19.7-23.6
France | I know a little bit about it | 18.4 | 16.6-20.3
France | I am quite familiar with this | 4.8 | 3.8-5.9
Mexico | I have never heard of this until now | 51.7 | 48.8-54.7
Mexico | I have only heard the phrase | 22.3 | 19.9-24.8
Mexico | I know a little bit about it | 21.0 | 18.7-23.5
Mexico | I am quite familiar with this | 4.9 | 3.8-6.3
Netherlands | I have never heard of this until now | 29.0 | 27.9-30.1
Netherlands | I have only heard the phrase | 31.7 | 30.6-32.8
Netherlands | I know a little bit about it | 31.8 | 30.7-32.9
Netherlands | I am quite familiar with this | 7.5 | 6.9-8.2
Poland | I have never heard of this until now | 50.9 | 49.1-52.8
Poland | I have only heard the phrase | 30.1 | 28.4-31.8
Poland | I know a little bit about it | 15.2 | 13.9-16.5
Poland | I am quite familiar with this | 3.8 | 3.2-4.6
South Korea | I have never heard of this until now | 31.1 | 29.1-33.2
South Korea | I have only heard the phrase | 39.2 | 37.1-41.4
South Korea | I know a little bit about it | 26.6 | 24.7-28.6
South Korea | I am quite familiar with this | 3.0 | 2.3-3.8
Spain | I have never heard of this until now | 49.5 | 47.5-51.5
Spain | I have only heard the phrase | 27.6 | 25.8-29.4
Spain | I know a little bit about it | 19.5 | 17.9-21.1
Spain | I am quite familiar with this | 3.5 | 2.8-4.3
USA | I have never heard of this until now | 42.5 | 37.7-47.4
USA | I have only heard the phrase | 26.9 | 22.7-31.4
USA | I know a little bit about it | 23.7 | 19.6-28.1
USA | I am quite familiar with this | 6.9 | 4.5-9.8

Table 1: Respondent Familiarity with Deepfakes, by Country (weighted).

4.1 Awareness and Attitudes

On average, 28.1% (95% CI = 27.0%-29.3%) of respondents reported being either quite familiar with, or knowing at least a little bit about, deepfake pornography. Descriptive statistics including 95% CIs broken out by country are available in Table 1. The percentage of respondents indicating high familiarity ranged from 3.0% in South Korea to 8.2% in Australia.

Figure 1:

Figure 1: Heatmap of Mean Criminalization Attitudes by Country. Scale ranges from -2 (“Definitely should not be a crime”) to 2 (“Definitely should be a crime”), with a midpoint of 0 (“Not sure”). Brighter red is more deserving of criminalization, darker blue is less deserving of criminalization. Standard errors are presented in parentheses.

Because all respondents received some information about deepfake pornography through the phrasing of the awareness question, they were then asked whether they thought a series of actions should be a crime, with response options ranging from -2=“Definitely should not be a crime” to 2=“Definitely should be a crime.” The midpoint of 0 was a “Not Sure” option. Weighted means [76] and standard errors by country and behavior are presented in Figure 1 (with brighter red indicating more deserving of criminalization and darker blue less deserving), and weighted medians can be found in Figure A3 in the Appendix. We intentionally included “traditional” forms of image-based sexual abuse (e.g., the taking, creating, threatening to share, or sharing of real non-consensual content) to better understand how respondents perceived AI-IBSA/deepfake pornography in comparison. Aggregated across countries, where equivalencies existed, respondents generally rated actions involving non-consensual real content as worse than the equivalent AI-generated content. For example, posting/sending of real content without consent was seen as significantly more deserving of criminalization than sharing deepfake pornography of ordinary people (Mdifference = 0.27, t = 38.76, SE = 0.01, p < 0.001, Cohen’s d = 0.30). Non-consensually filming or photographing someone was seen as significantly more deserving of criminalization as compared to making deepfake pornography (“ordinary people” for both, Mdifference = 0.56, t = 63.51, SE = 0.01, p < 0.001, Cohen’s d = 0.50).

We additionally disaggregated the deepfake pornography categories by victim-type; that is, celebrities versus non-celebrities. We hypothesized that respondents would have more lenient views towards deepfake pornography behaviors involving celebrities, who are more commonly victimized than non-famous individuals. The difference between paying someone to create deepfake pornography of a celebrity versus an ordinary person was statistically significant (Mdifference = 0.03, t = 5.01, p < 0.001, SE = 0.01, Cohen’s d = 0.04); however, the effect size was negligible. Similar findings can be seen for viewing deepfake pornography of ordinary people versus celebrities (Mdifference = 0.03, t = 4.63, p < 0.001, SE = 0.01, Cohen’s d = 0.03). This suggests little difference in perceptions of criminalization deservedness between celebrity and non-celebrity victims of equivalent behaviors.

With regard to deepfake pornography behaviors, we can draw three main conclusions. First, echoing the real content findings, the distribution of deepfake pornography was considered especially bad, as compared to something more passive such as viewing deepfake pornography (“ordinary people” for both, Mdifference = 0.68, t = 75.25, p < 0.001, SE = 0.01, Cohen’s d = 0.60). Second, paying someone to create deepfake pornography was considered worse than making it yourself (“ordinary people” for both, Mdifference = 0.33, t = 46.45, p < 0.001, SE = 0.01, Cohen’s d = 0.36). One possible interpretation is that respondents assume that going to significant lengths to have deepfake pornography created is indicative of malicious intent. Third, across countries, the behavior with the lowest average score (i.e., least bad) was the viewing of deepfake content.

Finally, respondents were asked about how much they agreed or disagreed with the following statement: “People shouldn’t get upset if someone creates a digitally altered video (e.g., fake porn or a ‘deepfake’) of them without their permission,” with anchors of 1=Strongly disagree and 7=Strongly agree. As seen in Table 2, across the different countries, the mean response fell between 1 and 2, with a median of 1 in all countries, indicating a general sentiment that people should get upset if they are the target of a deepfake pornography video.

Table 2:
Country | Mean | SE | Median
Australia | 1.8 | 0.04 | 1.0
Belgium | 1.5 | 0.03 | 1.0
Denmark | 1.4 | 0.03 | 1.0
France | 1.7 | 0.04 | 1.0
Mexico | 1.5 | 0.03 | 1.0
Netherlands | 1.5 | 0.03 | 1.0
Poland | 1.6 | 0.04 | 1.0
South Korea | 1.8 | 0.04 | 1.0
Spain | 1.4 | 0.03 | 1.0
USA | 1.8 | 0.04 | 1.0

Table 2: Level of agreement with statement: “People shouldn’t get upset if someone creates a digitally altered video (e.g., fake porn or a ‘deepfake’) of them without their permission” (1 = strongly disagree, 4 = neither agree nor disagree, 7 = strongly agree). Weighted statistics presented.

4.2 Behaviors

Behaviors were asked about in two different ways. First, respondents who reported that they were quite familiar with the term “deepfakes” or knew a little bit about it (28.1%, 95% CI = 27.0%-29.3%) were asked a follow-up question about whether they had engaged in specific deepfake pornography behaviors. Second, as part of the larger survey, respondents were asked about general image-based sexual abuse victimization and perpetration. If they reported experiences with digitally altered content, they were asked follow-up questions about whether the digitally altered content involved video(s), and if so, whether the video(s) were deepfakes. This is distinct from the first question in that it a) asks for more details of the perpetration/victimization (e.g., whether they had created, threatened to share, or shared such content), and b) asks respondents who reported victimization to clarify whether they had been the victim of deepfake pornography specifically.

4.2.1 Behaviors of “Familiar” Respondents.

The results of the first question (measuring perpetration behaviors) can be seen in Figure 2, where the reported data represents the percentage of all respondents. The respondents who did not see this question due to a lack of familiarity with deepfake pornography are represented in the missing data, which would make each bar sum to 100% if included. For example, 70% of respondents in Spain indicated they had never heard of this until now or they had only heard the phrase, so they were not asked this question. The most common behavior was the viewing of celebrity deepfake pornography, ranging from 2.8% (95% CI = 2.1%-3.7%) of all respondents in Poland, to 8.0% (95% CI = 6.6%-9.5%) of all respondents in Australia. Aggregated across the different countries, 6.2% (95% CI = 5.6%-6.9%) of all respondents indicated that they had viewed celebrity deepfake pornography. The second most common behavior (2.9% of all respondents, 95% CI = 2.5%-3.3%) was the viewing of deepfake pornography involving “ordinary” people, ranging from 0.7% (95% CI = 0.4%-1.3%) in Denmark, to 5.1% (95% CI = 4.1%-6.2%) in Mexico. Behaviors that required more time, skills, or resources were rarer, such as creating or paying someone to create deepfake pornography of ordinary people (0.6%, 95% CI = 0.4%-0.9%, and 0.5%, 95% CI = 0.3%-0.7%, respectively, aggregated across countries).

Figure 2:

Figure 2: Deepfake Pornography Behaviors by Country. Percentage of all respondents who indicated engaging, or not, in a particular behavior. 95% confidence interval bars presented in grey.

4.3 Respondents who Reported Personal Victimization or Perpetration

4.3.1 Victimization.

Experiences broken out by country are available in Figures 3 and 4. Aggregated across behaviors, respondents from Australia (3.7%), Mexico (2.9%), and South Korea (3.1%) reported the highest rates of victimization by deepfake pornography. Full results with 95% confidence intervals can be seen in Table A2 in the Appendix.

Aggregated across countries, 2.2% (95% CI = 1.8%-2.6%) of respondents reported some form of victimization. This included: 1.2% (95% CI = 0.9%-1.5%) of respondents who reported that someone had created deepfake pornography content of them; 1.3% (95% CI = 1.0%-1.6%) who reported that someone had posted or sent deepfake pornography content of them; and 1.2% (95% CI = 0.9%-1.5%) who reported that someone had threatened to post or send deepfake pornography content of them.

Figure 3:

Figure 3: Percentage of Respondents Reporting Different Types of Deepfake Pornography Video Victimization, by Country. 95% CIs in grey.

4.3.2 Perpetration.

Aggregating across behavior types (creating, posting, threatening to post), self-reported deepfake pornography perpetration was rare (1.8%, 95% CI = 1.4%-2.2%), with the highest rates reported in Australia (2.4%) and the United States (2.6%), and lowest in Belgium (0.3%). Full results can be seen in Table A2 in the Appendix. Aggregating across countries, 1.0% (95% CI = 0.8%-1.4%) of respondents indicated creating deepfake pornography, 1.0% (95% CI = 0.8%-1.3%) reported threatening to post, send, or share deepfake pornography, and 0.7% (95% CI = 0.5%-0.9%) reported actually posting, sending, or sharing deepfake pornography content. We included the creation of deepfake pornography as a perpetration behavior to mirror the fact that “creation” behaviors, such as filming or photographing real content without consent, fall under the umbrella of image-based sexual abuse perpetration. Nevertheless, we acknowledge that such behaviors might not be considered “perpetration” by others.

Figure 4:

Figure 4: Percentage of Respondents Reporting Different Types of Deepfake Pornography Video Perpetration, by Country. 95% CIs in grey.

4.4 Gender

4.4.1 Awareness and Attitudes.

Given some existing literature on gendered attitudes and behaviors towards image-based sexual abuse more broadly, we wanted to assess whether gender played a role in respondents’ answers. Respondents who identified as men reported greater familiarity with deepfake pornography than those who identified as women. A Pearson’s chi-squared test with Rao & Scott adjustment confirmed the significant relationship between NSII awareness and gender (F = 58.7, ndf = 3.0, ddf = 49287.1, p < 0.001). Men were 1.92 times (95% CI = 1.53-2.40) more likely than women to report being “quite familiar with this” and were significantly less likely to report having never heard of deepfake pornography until now (RR = 0.69, 95% CI = 0.65-0.73). Full details can be seen in Figure A1 of the Appendix.

Across deepfake behaviors, men rated criminalization worthiness significantly lower than women, with effect sizes falling between small and medium. For example, women found viewing deepfake pornography of celebrities (Mdifference = 0.58, t = 29.6, SE = 0.02, Cohen’s d = 0.41, p < 0.001) and viewing deepfake pornography of non-celebrities (Mdifference = 0.57, t = 29.2, SE = 0.02, Cohen’s d = 0.40, p < 0.001) to be more deserving of criminalization as compared to men, as seen in Figure 5 (with brighter red indicating more deserving of criminalization and darker blue less deserving). The smallest difference and effect size were seen in how each gender perceived paying someone to create deepfake pornography of an ordinary person (Mdifference = 0.37, t = 22.05, SE = 0.02, Cohen’s d = 0.28, p < 0.001).

Figure 5:

Figure 5: Heatmap of Mean Criminalization Attitudes by Respondent Gender. Scale ranges from -2 (“Definitely should not be a crime”) to 2 (“Definitely should be a crime”), with a midpoint of 0 (“Not sure”). Brighter red is more deserving of criminalization, darker blue is less deserving of criminalization. Standard errors are presented in parentheses.

Similarly, the mean agreement with the statement that people shouldn’t be upset to be the target of deepfakes was higher in men than in women, albeit with a small effect size (Mdifference = 0.34, t = 14.4, SE = 0.02, Cohen’s d = 0.24, p < 0.001).

4.4.2 Behaviors.

Of the two most commonly reported behaviors, men respondents were more likely to report viewing deepfake pornography; both that of celebrities (F = 71.1, ndf = 1, ddf = 16510, p < 0.001, RR = 3.5, 95% CI = 2.77 − 4.42) and non-celebrities (F = 14.3, ndf = 1, ddf = 16510, p < 0.001, RR = 2.66, 95% CI = 1.91 − 3.70).

Figure 6:

Figure 6: Deepfake Pornography Behaviors by Gender. Percentage of all respondents who indicated engaging, or not, in a particular behavior. 95% confidence interval bars presented in grey.

4.4.3 Respondents Who Reported Personal Victimization or Perpetration.

Looking at the effects of gender on victimization and perpetration, men reported significantly higher rates across all victimization and perpetration experiences related to creation and threats. For example, the risk of men reporting having threatened someone else with deepfake dissemination was 2.6 times that of women, and the risk of having been threatened themselves was 1.9 times that of women. Full results are presented in Table 3. Notably, there were no significant differences by gender in the risk of disseminating content or having their content disseminated.

Table 3:
Type | Experience | Proportion “yes”: men | Proportion “yes”: women | p-value | RR | 95% CI
Perpetration | Creating | 0.015 | 0.006 | <0.005 | 2.31 | 1.29-4.15
Perpetration | Threatening | 0.015 | 0.006 | <0.01 | 2.64 | 1.40-4.98
Perpetration | Posting/Sharing | 0.009 | 0.005 | 0.16 | 1.63 | 0.82-3.25
Victimization | Created | 0.016 | 0.007 | <0.001 | 2.23 | 1.39-3.58
Victimization | Threatened | 0.016 | 0.008 | <0.001 | 1.91 | 1.18-3.10
Victimization | Shared | 0.015 | 0.010 | 0.093 | 1.52 | 0.93-2.46

Table 3: Experiences of NSII by Gender, with Risk Ratios.

4.5 Qualitative Responses

In the survey, respondents were asked an optional, open-ended question: “Would you like to add any additional thoughts on the topic of ‘deepfakes’? Please feel free to skip this question if you don’t have anything to add.” A total of 2,477 open-ended responses were collected. Of those, 1,180 were removed because they were inappropriate or irrelevant. This left us with 1,296 open-ended responses (7.8% of respondents). No respondents discussed their own experiences of either victimization or perpetration.

The most prevalent theme was the condemnation of deepfake pornography, with just over 43% of the respondents who responded to this question expressing that view. Across responses, women (47.9%) expressed greater condemnation of deepfake pornography than men (38.2%, x2 = 11.58, df = 1, p < .001). There were significant differences by country in terms of frequency of condemnation sentiment (x2 = 42.11, df = 9, p < 0.001). Comments relating to the condemnation of deepfake pornography were most common from respondents from France (60.8%), followed by Denmark (53.7%) and the United States (48.2%). Comments relating to this theme were least common in South Korea (25.7%), followed by the Netherlands (27.8%) and Belgium (35.0%). Respondents expressed their condemnation of deepfake pornography, calling the act “disgusting and perverted” and “an extremely sick concept.” Some respondents mentioned the potential pervasive impacts of deepfake pornography, such as one Australian woman (aged 35-44) who said, “I think it’s dangerous as it could lead to serious injury (mental and emotional etc.) to the innocent person on the receiving end of the deepfake.” Others commented on the damage it can do to a victim’s employment opportunities or integrity. For example, a man from Mexico (aged 18-24) commented that “it is a stupid strategy to continue damaging the integrity of people who do not want to be intimately seen by the public.” There was also a general consensus among respondents that deepfakes were unethical and morally wrong.

The second most prevalent theme, found in just over 30% of all responses to the open-ended question, referred to a fear of technology, specifically deepfakes and artificial intelligence (AI). There were significant differences by country (x2 = 32.81, df = 9, p < 0.001), but not by gender (x2 = 0.011, df = 1, p = 0.73). These responses were most common in the Netherlands (50.0%) and least common in France (16.2%) and South Korea (19.7%). References to the futuristic nature of deepfake technology, fear of its capabilities and accessibility, as well as the realness of deepfaked images, showed up across these responses. For example, one man from Mexico (aged 25-34) commented on how “it is difficult to assimilate [to] these types of tools, before they seemed futuristic but they are already very normal.” Many respondents described deepfakes as a troubling and terrifying advancement of AI technology. Some, particularly women, indicated their concern about being victimized by deepfake pornography. For example, a woman from Denmark (aged 25-34) referred to deepfakes as “widely scary” because “it’s hard not to think if you have become a victim yourself without knowing it.” Other respondents focused on the continued growth of deepfakes and the implications they might have for future generations. For example, an Australian woman (aged 45-54) described how she “fear[ed] for [her] daughter’s future.” Comments on the realness of deepfakes were a common thread throughout responses, with a Polish man (aged 35-44) saying “soon it will not be known if what we [are] seeing is reality or not.” Similarly, a Belgian woman (aged 45-54) questioned “can we still know if the photo or video seen or received is real or not?! It becomes difficult to make your own judgement.” Respondents also spoke to the difficulties of regulating deepfakes online. A man from Mexico (aged 65+) emphasized how it is “almost impossible to control,” while a woman from the United States (aged 65+) asked “how can one protect oneself from this? Or prove it’s not them?”

The third most common theme was punitive and justice attitudes, seen in just over a quarter of respondents. Across responses, men were slightly more likely to express punitive and justice attitudes (X2 = 4.85, df = 1, p < 0.05), and there were significant differences by country (X2 = 91.18, df = 9, p < 0.001). References to punishment, injustice, criminalization, and governmental responsibility were common in South Korea. Just over half (55.3%) of South Korean respondents demanded strengthened laws relating to deepfakes. Responses often referred to deepfake pornography as a “sex crime” and called for the implementation of strong criminal justice regulations and sanctions. Accordingly, many South Korean respondents argued for the development of criminal sanctions that imposed “strong punishment” for perpetrators of deepfake pornography. As one South Korean respondent (male, aged 45-54) stated, “deepfakes [are] not yet controversial, but it can cause many social problems. In reality, legal regulations are needed to strongly punish [these] sex crimes.”

Many respondents disclosed that they were unaware if penalties and sanctions were available in their country for deepfakes, while others were concerned with the slow pace of governmental responses. One woman from the United States (aged 55-64) stated that she “feared that politics doesn’t attract enough experts in tech who could help shape policy and procedure on deepfakes … and it needs to be something for which governments set as a goal to get ahead of and not ignore [or] play catch up.” Similarly, a woman from Denmark (aged 25-34) emphasized the need to “hurry to regulate how people are taking advantage of AI.” References to internet regulation and platform responsibility were found across some responses. For instance, one Australian woman (aged 65+) commented that “platforms need to take more responsibility for content,” and a man from the Netherlands (aged 65+) recommended that this type of content should be “immediately removed by relevant platforms and should be punishable for both maker and distributor.”

Finally, only about 4% of responses indicated victim-blaming attitudes. These respondents noted that people should be more careful with what they post on the internet, and that they should refrain from taking or sharing nude photos or videos, and stop complaining about the issue and take responsibility. There were no significant differences by gender (X2 = 1.68, df = 1, p = 0.20), nor by country (per a Fisher’s exact test for count data, p = 0.08). For example, a non-binary respondent from Mexico (aged 35-44) and a woman from Poland (aged 55-64) noted that people have to be careful with what they share on the internet. Two Australian respondents encouraged people to stop taking nude images altogether; for example, one Australian man (aged 55-64) commented “if you don’t want something shared on the internet don’t take nude photos” while an Australian woman (aged 65+) said, “that’s why I do NOT send photos over the internet at all. I can’t understand why people send photos to anyone — keep your [images] private.” One man from Poland (aged 65+) was more explicit and indicated that people should not complain about being victimized: “you just should not share any photos on the web at all, and if you make it available, you should take into account the consequences and then regret pretending to be a fool and even worse to blame anyone.”

In summary, only 7.8% of all respondents provided relevant answers to the optional, open-ended question. This means that the majority of survey respondents did not provide additional thoughts on the topic - either because they did not know enough about deepfake pornography or for other reasons.


5 DISCUSSION

Our study generated four main findings about AI-IBSA/deepfake pornography awareness, behaviors, and gender dynamics. First, despite widespread press coverage, particularly in Western countries, the concept of deepfake pornography is not well-known. Yet on balance, when informed about the concept, respondents thought behaviors associated with non-consensual fake intimate imagery should be criminalized. Second, self-reported victimization prevalence was relatively low, as compared to more traditional image-based sexual abuse prevalence rates. Third, the most common behaviors self-reported by respondents were passive and readily available (e.g., the consumption of celebrity deepfake pornography), with behaviors requiring effort and/or money being very rare (e.g., creating deepfake pornography). Finally, compared to women, men think of these behaviors as less bad, and are also more likely to report being both victimized by and perpetrating AI-IBSA.

5.1 Attitudes and Awareness

Less than 30% of all respondents indicated some level of familiarity with the concept of deepfake pornography. This was also arguably reflected in the qualitative responses, with only 7.8% of all respondents answering the open-ended question. Presumably, as the technology becomes more widespread and ever more available, familiarity will increase. This may be a double-edged sword; it is likely to result in increased use of AI-IBSA as an abuse vector (both for purposes of humiliation, and as a tool for sexual extortion), but may also result in increased suspicion of the veracity of explicit content. One positive finding was general agreement across countries that perpetration behaviors associated with “traditional” image-based sexual abuse are deserving of criminalization. Research has widely found some degree of victim-blaming reflected in attitudes towards traditional non-consensual intimate imagery [30, 56, 75]. Attitudes were slightly more lenient with regard to deepfake behaviors, although over time attitudes may evolve with increased familiarity. For instance, the realization that any private individual can be deepfaked, requiring no actions on the part of the target, may engender more empathy and less victim-blaming attitudes. It should be noted that few respondents in the qualitative part of the survey expressed victim-blaming attitudes. This can be compared to other more commonly expressed pro-social attitudes in the qualitative responses, including those expressing condemnation or support for punitive action.

5.2 Behaviors

The vast majority of respondents indicated no engagement in any type of AI-IBSA behaviors, with the most popular behavior being the consumption of celebrity deepfake pornography. It was even rarer for respondents to indicate perpetration themselves. This was similar for victimization, with few respondents indicating that they had been victimized by deepfake pornography. However, the victimization statistics in particular should be interpreted with caution. Even with traditional forms of image-based sexual abuse, many victims may be unaware of their victimhood. AI-IBSA may exacerbate this dynamic given that deepfake technology “democratizes” victimization and image-based sexual abuse by enabling anyone with a computer to be a perpetrator, and anyone with imagery online to be targeted. Also, there are additional unique motivations for AI-IBSA that may result in less likelihood of victim notification; for instance, if the motivations for the creation or distribution of AI-IBSA are for sexual gratification purposes and/or for demonstrating or practicing digital skills in AI. Finally, as content may be distributed on dedicated sites or within closed communities on sites such as Discord or Reddit, victims of AI-IBSA may not know about or find out about their images.

Because AI-IBSA is a subset of image-based sexual abuse, it is useful to compare its rates with previously reported statistics on image-based sexual abuse more broadly. There are several associated challenges, including sampling methodologies and definitions. For example, studies may define image-based sexual abuse behaviors differently, failing to include the non-consensual creation or filming of content, or threatening to share content, alongside the actual non-consensual sharing of content. Studies may also intentionally survey younger populations who may be more likely to have relevant experiences, as opposed to collecting population-level prevalence data. Notwithstanding those limitations, previous work has shown that image-based abuse prevalence is increasing over time, driven by changes in technology use, dating, sex, and privacy [63]. Powell et al. [63] found, on average across Australia, New Zealand, and the UK, that approximately one in three survey respondents indicated at least one experience of one or more forms of image-based sexual abuse, defined similarly to our study. An American study [68] focused exclusively on the non-consensual sharing of content and found that one in 12 respondents indicated victimization, while one in 20 reported perpetration. Even taking the more expansive definition of image-based sexual abuse, our rates of perpetration and victimization using deepfake technology were quite low. Given the trends in traditional image-based sexual abuse [63], and the increasing ease of NSII creation, we expect these rates to increase over time.

5.3 Gender and AI-IBSA

Awareness of deepfakes was higher among respondents who identified as men as compared to women. This is consistent with the literature on the “digital divide,” which suggests that women face disproportionate barriers to accessing and using the internet compared to men. Even when women have access to the internet and technology, they may not have the time, resources, skills, or knowledge to leverage it [1]. Additionally, the attitudinal questions indicate a trend towards men diminishing the harms or “badness” of deepfake pornography behaviors. This is consistent with previous findings on deepfake pornography [29], and with other findings around image-based sexual abuse more broadly [7, 10].

Men were more likely to report both perpetration and victimization by deepfake pornography, across subtypes. Studies continue to report mixed findings regarding the role of gender in image-based sexual abuse. Powell et al. [63] reported higher rates of perpetration among men, and similar rates of victimization for men and women, but differential impacts. Our findings are somewhat inconsistent with a report which found that the majority of AI-IBSA available online targeted women [2]. We argue that differences in sampling methodology may help explain these inconsistencies. For instance, in cases of sexual extortion, the content may never be made publicly available online. It is also possible that the gender trend is partly explained by perpetrator motivation, given previous research supporting higher rates of “sextortion” among men as compared to heterosexual women [23], including among adolescent populations [61]. Another explanation for this discrepancy is that most deepfake videos online (an estimated 96%) are sexualized images of women actors and musicians, who may be less likely to participate in survey research than non-public figures like our survey respondents [2].

While we asked broadly about motivations in the larger survey, we did not specifically ask about motivations for deepfake pornography perpetration or victimization. Motivations for such behaviors may range from personal sexual gratification, to use for extortion purposes, to demonstrating technical skill in the creation of content. A consideration for future research should be the use of AI-IBSA (as well as other forms of NSII not involving AI) for extortion, particularly given anecdotal evidence of significant associated harms (e.g., suicide, suicidal ideation, or suicide attempts) [64].

5.4 Limitations and Mitigation

Cross-country surveys are logistically difficult and require significant care in interpreting findings. Our study had to address language differences and non-response bias. The former were addressed, as detailed above, through cross-validation. The latter was a more significant concern, as others have noted that the language used to discuss topics such as image-based sexual abuse, and pornography more broadly, is culturally dependent [71]. We attempted to mitigate this by allowing respondents to select “Prefer not to say” for any question that involved self-reporting on behavior, as well as periodically reminding respondents of that option throughout the survey. Nevertheless, it is possible that selection bias resulted in an underestimation of prevalence, with respondents opting out of the survey because of personal experiences or demographic characteristics. South Korea, for example, had a high percentage of bisexual respondents, which may indicate that our attempts to mitigate non-response bias were less successful there. In addition, we evaluated prevalence based only on respondents who indicated victimization involving deepfake pornography specifically; we may therefore have underestimated prevalence by excluding respondents who were unsure, or who incorrectly believed that their content had not been deepfaked. Finally, despite explicit goals for representative sampling, the participant pool fell short of full representativeness, necessitating weighting of the results.
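To make the weighting step concrete, the sketch below shows one common approach: post-stratification against census-style demographic targets, followed by a weighted prevalence estimate with an approximate 95% confidence interval based on the Kish effective sample size. The cell labels, target shares, and synthetic data are illustrative assumptions, and the Python code is a stand-in rather than the authors' actual pipeline (which, judging from the cited R packages, was likely implemented in R).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical respondent-level data: one row per respondent, with a
# demographic cell label and a 0/1 indicator for reported victimization.
df = pd.DataFrame({
    "cell": ["women_18_34", "women_35_plus", "men_18_34", "men_35_plus"] * 250,
    "victimized": rng.binomial(1, 0.02, size=1000),
})

# Assumed population shares for each demographic cell (e.g., from census data).
population_share = {
    "women_18_34": 0.17, "women_35_plus": 0.34,
    "men_18_34": 0.16, "men_35_plus": 0.33,
}

# Post-stratification weight: population share divided by sample share,
# so over-represented cells are down-weighted and vice versa.
sample_share = df["cell"].value_counts(normalize=True)
df["weight"] = df["cell"].map(lambda c: population_share[c] / sample_share[c])

# Weighted prevalence estimate.
w = df["weight"].to_numpy()
y = df["victimized"].to_numpy()
p_hat = np.average(y, weights=w)

# Approximate 95% CI using the Kish effective sample size to account for
# the variance inflation introduced by unequal weights.
n_eff = w.sum() ** 2 / (w ** 2).sum()
se = np.sqrt(p_hat * (1 - p_hat) / n_eff)
print(f"weighted prevalence: {p_hat:.1%} "
      f"(95% CI {p_hat - 1.96 * se:.1%} to {p_hat + 1.96 * se:.1%})")
```

Dedicated survey packages compute design-based variances more carefully than this normal approximation, but the sketch captures the core idea behind the weighted estimates and confidence intervals reported in Table A2.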

5.5 Design Considerations

In this study, we found that the majority of respondents across all countries were relatively unfamiliar with the concept of deepfake pornography. We also found that respondents, on average, considered deepfake pornography behaviors deserving of punishment. Reported perpetration and victimization rates were low, with the most commonly reported behavior being the consumption of deepfake pornography. Finally, gender played a part in both experiences and attitudes, with men deeming deepfake pornography behaviors less harmful and less deserving of punishment than women did. This has implications for both policy and practice.

As relatively few respondents were aware of deepfake pornography, this signals first and foremost that more education is needed to deter both potential perpetration and the consumption of AI-IBSA. Education efforts should focus on the harms suffered by victim-survivors, as well as the potential consequences of perpetration. Given the gender dynamics in perceived harmfulness, education should prioritize reaching boys and men.

Figure 7: Potential Intervention Points. The top half of this figure represents societal-level interventions, while the bottom half is more directed at individuals or companies in the deepfake ecosystem.

Second, many scholars have focused on the need for legislation to criminalize these image-based abuse behaviors and provide clear redress for victims [15, 20, 34, 52, 73]. For a law to have both general and specific deterrent effects, the consequences of breaking it must be known to the general public, and those who break it must be prosecuted and appropriately punished. Some of our trends, particularly the relatively high prevalence in both South Korea and Australia, indicate that laws are not a panacea for eliminating or even reducing this behavior. For instance, some Australian states and territories specifically criminalize the distribution of NSII [8], while South Korea allows for up to three years' imprisonment merely for viewing non-consensual intimate imagery [14, 50]. Given that this behavior primarily happens online, and that victim-survivors may not be aware of the material or, if they become aware, may be reluctant to report it, existing legislation arguably provides weak deterrence: very few people have been prosecuted for NSII in jurisdictions that do have such laws in place. As we found in this study, awareness of the concept of deepfake pornography was low, and one may reasonably infer that awareness of related laws is low as well. While developing legislation around an issue helps create societal norms and signals the unacceptability of a behavior, it is crucial to acknowledge the limits of regulation in enacting true societal change. Those limits underscore the importance of developing parallel efforts, such as technical solutions, that may mitigate these harms by reducing the creation and dissemination of NSII.

Third, there are a number of potential technical intervention points. First, discovery tools that allow individuals to search the web for sexual material via facial recognition technology may be empowering, although safety, privacy, and security concerns need to be taken into account (e.g., building in protections so that individuals can only search for content depicting their own faces). Second, while many videos are posted on dedicated deepfake websites or make no claims to veracity, the use of deepfakes for sexual extortion indicates that technical methods of annotating content, such as watermarks, may help mitigate NSII harms [38]. Third, platforms can make choices about what they allow and disallow in their policies and community standards, and legislation can assist in justifying those choices; Google Search [35], Reddit [40], and PornHub [18] have already published dedicated NSII policies. Fourth, various stakeholders have worked to create solutions for victims, primarily focused on triage after victimization has occurred; these help-seeking resources (whether helplines, online forms, or chatbots [25, 50]) will need to be aware of this new type of image-based sexual abuse and stay up to date with the types of redress that victims may seek [4]. Fifth, where tools have a payment or sign-up flow, users could be required as part of that flow to read and acknowledge information about the harms and potential consequences of NSII creation and dissemination. Finally, platforms have demonstrated an ability to detect nude content or, in some cases, non-consensual nude or sexual content. If creation tools applied similar detection to block creation, or at the very least to serve a warning interstitial about the harms associated with NSII (including AI-IBSA), it could act as a further deterrent to producing and uploading this content; a simplified sketch of such a gating flow follows.
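The sketch below illustrates how a creation or upload flow might gate requests on an existing nudity/NSII classifier, either blocking the request or serving a warning interstitial before proceeding. The classifier stub, score threshold, and return values are hypothetical placeholders; real platforms' detection systems and policy thresholds are not public, so this is only an illustration of the decision logic described above.

```python
from dataclasses import dataclass

@dataclass
class ClassifierResult:
    nudity_score: float        # 0..1 likelihood of nude/sexual content
    known_victim_match: bool   # e.g., a hash match against reported NSII

def classify_upload(image_bytes: bytes) -> ClassifierResult:
    # Stub standing in for a platform's existing nudity/NSII detection stack;
    # a real system would run learned classifiers and perceptual-hash lookups.
    return ClassifierResult(nudity_score=0.9, known_victim_match=False)

def handle_creation_request(image_bytes: bytes, acknowledged_warning: bool) -> str:
    """Decide whether to block, warn, or allow a synthetic-image creation request."""
    result = classify_upload(image_bytes)

    if result.known_victim_match:
        # Hard block: content matches material already reported as non-consensual.
        return "BLOCKED"

    if result.nudity_score >= 0.8 and not acknowledged_warning:
        # Soft friction: serve an interstitial describing NSII harms and potential
        # consequences, and require explicit acknowledgement before continuing.
        return "SHOW_WARNING_INTERSTITIAL"

    return "ALLOWED"

print(handle_creation_request(b"\x89PNG...", acknowledged_warning=False))
# -> SHOW_WARNING_INTERSTITIAL
```

Even where such friction does not stop a determined creator, interstitials of this kind surface the educational messaging discussed earlier at the moment of potential perpetration.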

As practitioners and designers think about future interventions, it may be helpful to break this issue down into the relevant behaviors shown in Figure 7 and to consider intervention options (which should be treated as complementary rather than individually sufficient) at each point in the process. The increased ease of access to NSII creation technologies was the catalyst for this project and represents the first point at which interventions could be made. Are there ways, for example, to limit these tools' ability to create nude or intimate imagery? If creators manage to circumvent those barriers, are there warning messages that can highlight either the harms to targets or the potential punishments associated with creating or distributing such content? After creation, are there ways to build in indicators of provenance (e.g., watermarking) that signal the synthetic nature of the content? Finally, assuming most impediments can be circumvented by a motivated creator, empathetic and multi-modal resources for victims could include both tooling to discover where relevant content is published and help with seeking takedowns and recourse more broadly. These are primarily design interventions, but overarching all of them would be the societal interventions of educational programming and relevant legislation that makes clear the potential punishments associated with each type of behavior (extortion, posting/distribution, etc.), paired with prosecutorial actions that demonstrate a willingness to hold perpetrators to account.
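As one concrete illustration of the provenance point raised above, the toy sketch below attaches a signed "synthetic content" manifest to a generated file and verifies it later. It is a simplified stand-in (an HMAC over the file hash and a JSON manifest) for standards-based content credentials or robust watermarking; the key handling, manifest fields, and tool name are illustrative assumptions only.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustrative only; real systems use managed keys/PKI

def attach_provenance(content: bytes, tool_name: str) -> dict:
    """Create a signed manifest declaring the content as synthetic."""
    manifest = {
        "synthetic": True,
        "generator": tool_name,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the manifest matches the content and the signature is valid."""
    manifest = record["manifest"]
    if manifest["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

video = b"fake-video-bytes"
record = attach_provenance(video, tool_name="example-deepfake-tool")
print(verify_provenance(video, record))              # True
print(verify_provenance(b"tampered-bytes", record))  # False
```

Metadata-style manifests can be stripped when content is re-encoded, which is why robust watermarks embedded in the pixels or audio are typically discussed alongside signed manifests as provenance signals.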

5.6 Conclusion

In this study, we used a survey to explore general attitudes toward AI-IBSA (or “deepfake pornography”), as well as AI-IBSA behaviors and experiences, in 10 different countries around the world. We found relatively low rates of awareness across countries, but respondents expressed concern about and condemnation of image-based sexual abuse behaviors, including deepfake pornography (as seen in the questions about whether victims should be upset and about criminalization, and in the qualitative responses). The most commonly reported behavior was the consumption of celebrity deepfake pornography, while personal victimization and perpetration rates were quite low. Men tended to find consuming deepfake pornography less objectionable than women did, which was also reflected in their higher rates of self-reported consumption. Finally, men were more likely to report both perpetration and victimization by deepfake pornography, a trend that future research should explore, as we expect prevalence to increase over time. Laws, policies, and support services can help to mitigate the harms of NSII; however, more resources need to be directed toward detecting and blocking the creation of content prior to dissemination, as well as preventing the abuse from happening in the first place through educational programs and initiatives.


APPENDIX

Table A1: Participant Demographics. (Gender may not sum to 100% due to the small fraction of respondents who preferred to self-describe, not disclose, or reported as non-binary.)

Country                             Australia   Belgium         Denmark   France    Mexico
Language(s) survey was offered in   English     French, Dutch   Danish    French    Spanish
Respondent count                    1651        1617            1639      1636      1714
Gender: women                       51%         50%             51%       52%       51%
        men                         48%         48%             49%       46%       48%
Age:    18-24                       10%         9%              9%        9%        16%
        25-34                       10%         9%              12%       8%        13%
        35-44                       11%         10%             5%        9%        12%
        45-54                       9%          10%             10%       10%       10%
        55-64                       9%          10%             8%        11%       8%
        65+                         12%         11%             15%       12%       4%
Sexual orientation:
        Heterosexual                87%         85%             91%       85%       84%
        Gay/Lesbian                 4%          4%              3%        4%        3%
        Bisexual                    5%          6%              3%        4%        5%
        Prefer to self-describe     2%          2%              1%        2%        2%
        Prefer not to say           3%          3%              3%        5%        6%

Country                             Netherlands Poland          South Korea  Spain    USA
Language(s) survey was offered in   Dutch       Polish          Korean       Spanish  English
Respondent count                    1665        1639            1641         1711     1780
Gender: women                       51%         52%             50%          52%      51%
        men                         49%         47%             50%          48%      47%
Age:    18-24                       10%         9%              9%           8%       11%
        25-34                       9%          10%             9%           8%       12%
        35-44                       9%          11%             11%          12%      10%
        45-54                       11%         9%              11%          12%      10%
        55-64                       9%          11%             13%          13%      10%
        65+                         14%         9%              5%           8%       11%
Sexual orientation:
        Heterosexual                86%         86%             68%          89%      82%
        Gay/Lesbian                 4%          2%              3%           3%       7%
        Bisexual                    5%          3%              15%          4%       7%
        Prefer to self-describe     1%          2%              5%           1%       2%
        Prefer not to say           3%          7%              10%          3%       4%

Table A2: Percentage of Respondents Indicating Victimization or Perpetration Experience via Deepfake Pornography

Country        Victimization (95% CI)   Perpetration (95% CI)
Australia      3.7% (2.8%-4.7%)         2.4% (1.6%-3.3%)
Belgium        0.7% (0.4%-1.2%)         0.3% (0.1%-0.7%)
Denmark        1.0% (0.6%-1.6%)         0.7% (0.4%-1.2%)
France         1.6% (1.1%-2.3%)         1.1% (0.7%-1.8%)
Mexico         2.9% (2.2%-3.8%)         1.2% (0.8%-1.8%)
Netherlands    1.0% (0.6%-1.6%)         0.4% (0.1%-0.7%)
Poland         0.9% (0.5%-1.4%)         0.5% (0.3%-1.0%)
South Korea    3.1% (2.3%-4.0%)         1.2% (0.7%-1.8%)
Spain          1.4% (0.9%-2.1%)         0.8% (0.4%-1.3%)
USA            2.3% (1.6%-3.1%)         2.6% (1.8%-3.5%)

Figure A1: Familiarity with Deepfake Pornography by Gender

Figure A2: Heatmap of Criminalization Attitudes by Country. Scale ranges from -2 (“Definitely should not be a crime”) to 2 (“Definitely should be a crime”). Brighter red is more deserving of criminalization, darker blue is less deserving of criminalization. Numbers shown are medians.

Figure A3: Heatmap of Median Criminalization Attitudes by Gender. Scale ranges from -2 (“Definitely should not be a crime”) to 2 (“Definitely should be a crime”). Brighter red is more deserving of criminalization, darker blue is less deserving of criminalization. Numbers shown are medians.

Supplemental Material

Video Preview (mp4, 1.1 MB)
Video Presentation (mp4, 36.9 MB)

References

1. Ali Acilar and Øystein Sæbø. 2023. Towards understanding the gender digital divide: A systematic literature review. Global Knowledge, Memory and Communication 72, 3 (Feb. 2023), 233–249.
2. Henry Ajder, Giorgio Patrini, Francesco Cavalli, and Laurence Cullen. 2019. The State of Deepfakes: Landscape, Threats, and Impact. Technical Report. Deeptrace Labs. https://regmedia.co.uk/2019/10/08/deepfake_report.pdf
3. Shelby Akerley. 2020. Let's talk about (fake) sex baby: A deep dive into the distributive harms of deepfake pornography. Arizona Law Journal of Emerging Technologies 4 (2020), 1–58.
4. Nadisha-Marie Aliman, Leon Kester, and Roman Yampolskiy. 2021. Transdisciplinary AI observatory—retrospective analyses and future-oriented contradistinctions. Philosophies 6, 1 (Jan. 2021), 6. https://doi.org/10.3390/philosophies6010006
5. Tomas J Aragon, MP Fay, D Wollschlaeger, et al. 2020. epitools: Epidemiology Tools. R package version 0.5-10.1. https://CRAN.R-project.org/package=epitools
6. Vincent Arel-Bundock. 2024. marginaleffects: Predictions, Comparisons, Slopes, Marginal Means, and Hypothesis Tests. R package version 0.17.0.9002. https://marginaleffects.com/
7. Alison Attrill-Smith, Caroline J. Wesson, Michelle L. Chater, and Lucy Weekes. 2022. Gender differences in videoed accounts of victim blaming for revenge porn for self-taken and stealth-taken sexually explicit images and videos. Cyberpsychology: Journal of Psychosocial Research on Cyberspace 15, 4 (Apr. 2022), Article 3. https://doi.org/10.5817/CP2021-4-3
8. Youth Law Australia. 2021. Image-Based Abuse | Youth Law Australia. https://yla.org.au/nt/topics/internet-phones-and-technology/image-based-abuse/
9. Shannon Bond. 2023. It takes a few dollars and 8 minutes to create a deepfake. And that's only the start. https://www.npr.org/2023/03/23/1165146797/it-takes-a-few-dollars-and-8-minutes-to-create-a-deepfake-and-thats-only-the-start
10. Sarah Bothamley and Ruth J Tully. 2017. Understanding revenge pornography: Public perceptions of revenge pornography and victim blaming. Journal of Aggression, Conflict and Peace Research 10, 1 (Dec. 2017), 1–10. https://doi.org/10.1108/JACPR-09-2016-0253
11. Sergi D Bray, Shane D Johnson, and Bennett Kleinberg. 2023. Testing human ability to detect “deepfake” images of human faces. Journal of Cybersecurity 9, 1 (June 2023), tyad011. https://doi.org/10.1093/cybsec/tyad011
12. Yihan Cao, Siyu Li, Yixin Liu, Zhiling Yan, Yutong Dai, Philip S. Yu, and Lichao Sun. 2023. A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT. arXiv:2303.04226 [cs.AI]
13. Chance Carter. 2022. Reflections on revenge porn: Illustrating why the legal system should adopt a comprehensive response to nonconsensual pornography in the US. Montana Law Review 83, 2 (Sept. 2022), 293–322. https://scholarworks.umt.edu/cgi/viewcontent.cgi?article=2503&context=mlr
14. Korea Law Translation Center. 2019. Act On Special Cases Concerning The Punishment Of Sexual Crimes. (Aug. 2019).
15. Bobby Chesney and Danielle Citron. 2019. Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review 107 (Dec. 2019), 1753–1820. https://doi.org/10.2139/ssrn.3213954
16. Danielle Keats Citron and Mary Anne Franks. 2014. Criminalizing revenge porn. Wake Forest Law Review 49 (May 2014), 345–391.
17. Justin D Cochran and Stuart A Napshin. 2021. Deepfakes: Awareness, concerns, and platform accountability. Cyberpsychology, Behavior, and Social Networking 24, 3 (March 2021), 164–172. https://doi.org/10.1089/cyber.2020.0100
18. Samantha Cole. 2018. Pornhub is banning AI-generated fake porn videos, says they're nonconsensual. https://www.vice.com/en/article/zmwvdw/pornhub-bans-deepfakes
19. Rob Cover. 2022. Deepfake culture: The emergence of audio-video deception as an object of social anxiety and regulation. Continuum 36, 4 (2022), 609–621. https://doi.org/10.1080/10304312.2022.2084039
20. Rebecca A Delfino. 2019. Pornographic deepfakes: The case for federal criminalization of revenge porn's next tragic act. Fordham Law Review 88 (2019), 887–938.
21. Brian Dolhansky, Joanna Bitton, Ben Pflaum, Jikuo Lu, Russ Howes, Menglin Wang, and Cristian Canton Ferrer. 2020. The Deepfake Detection Challenge (DFDC) Dataset. arXiv:2006.07397 [cs.CV]
22. Suzie Dunn. 2021. Women, not politicians, are targeted most often by deepfake videos. Centre for International Governance Innovation (March 2021). https://www.cigionline.org/articles/women-not-politicians-are-targeted-most-often-deepfake-videos/?s=03
23. Asia A Eaton, Divya Ramjee, and Jessica F Saunders. 2023. The relationship between sextortion during COVID-19 and pre-pandemic intimate partner violence: A large study of victimization among diverse US men and women. Victims & Offenders 18, 2 (2023), 338–355. https://doi.org/10.1080/15564886.2021.2022057
24. eSafety Commissioner. 2023. Image-Based Abuse. https://www.esafety.gov.au/key-topics/image-based-abuse
25. Mattia Falduti and Sergio Tessaris. 2022. On the use of chatbots to report non-consensual intimate images abuses: The legal expert perspective. In Proceedings of the 2022 ACM Conference on Information Technology for Social Good. 96–102.
26. Hany Farid. 2022. Creating, using, misusing, and detecting deep fakes. Journal of Online Trust and Safety 1, 4 (Sept. 2022). https://doi.org/10.54501/jots.v1i4.56
27. Matthew Feeney. 2021. Deepfake laws risk creating more problems than they solve. Regulatory Transparency Project (2021). https://rtp.fedsoc.org/wp-content/uploads/Paper-Deepfake-Laws-Risk-Creating-More-Problems-Than-They-Solve.pdf
28. Matthew F Ferraro, Jason C Chipman, and Stephen W Preston. 2020. The federal “deepfakes” law. The Journal of Robotics, Artificial Intelligence & Law 3, 4 (2020), 229–233.
29. Dean Fido, Jaya Rao, and Craig A Harper. 2022. Celebrity status, sex, and variation in psychopathy predicts judgements of and proclivity to generate and distribute deepfake pornography. Computers in Human Behavior 129 (April 2022), 107141. https://doi.org/10.1016/j.chb.2021.107141
30. Asher Flynn, Elena Cama, Anastasia Powell, and Adrian J Scott. 2023. Victim-blaming and image-based sexual abuse. Journal of Criminology 56, 1 (2023), 7–25. https://doi.org/10.1177/263380762211353
31. Greg Freedman Ellis and Ben Schneider. 2023. srvyr: 'dplyr'-Like Syntax for Summary Statistics of Survey Data. R package version 1.2.0. https://CRAN.R-project.org/package=srvyr
32. Dilrukshi Gamage, Piyush Ghasiya, Vamshi Bonagiri, Mark E. Whiting, and Kazutoshi Sasahara. 2022. Are deepfakes concerning? Analyzing conversations of deepfakes on Reddit and exploring societal implications. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 103, 19 pages. https://doi.org/10.1145/3491102.3517446
33. Dilrukshi Gamage, Kazutoshi Sasahara, and Jiayu Chen. 2021. The emergence of deepfakes and its societal implications: A systematic review. TTO (Oct. 2021), 28–39.
34. Anne Pechenik Gieseke. 2020. “The new weapon of choice”: Law's current inability to properly address deepfake pornography. Vanderbilt Law Review 73 (2020), 1479–1515.
35. Google. 2023. Remove involuntary fake pornography from Google. https://support.google.com/websearch/answer/9116649?hl=en
36. Jeffrey Gottfried. 2019. About three-quarters of Americans favor steps to restrict altered videos and images. https://www.pewresearch.org/short-reads/2019/06/14/about-three-quarters-of-americans-favor-steps-to-restrict-altered-videos-and-images/
37. Matthew Groh, Ziv Epstein, Chaz Firestone, and Rosalind Picard. 2022. Deepfake detection by human crowds, machines, and machine-informed crowds. Proceedings of the National Academy of Sciences 119, 1 (2022), e2110013119.
38. Keith Raymond Harris. 2021. Video on demand: What deepfakes do and how they harm. Synthese 199, 5-6 (2021), 13373–13391. https://doi.org/10.1007/s11229-021-03379-y
39. Nicola Henry, Clare McGlynn, Asher Flynn, Kelly Johnson, Anastasia Powell, and Adrian J Scott. 2020. Image-based sexual abuse: A study on the causes and consequences of non-consensual nude or sexual imagery. Routledge, New York, NY.
40. Sean Hollister. 2023. Reddit's AI porn ban has a carveout for Rule 34. https://www.theverge.com/2023/7/6/21525998/reddit-ai-porn-fictional-characters-carveout-sexual-images
41. Tyrone Kirchengast. 2020. Deepfakes and image manipulation: Criminalisation and control. Information & Communications Technology Law 29, 3 (2020), 308–323. https://doi.org/10.1080/13600834.2020.1794615
42. Nils C Köbis, Barbora Doležalová, and Ivan Soraperra. 2021. Fooled twice: People cannot detect deepfakes but think they can. iScience 24, 11 (2021). https://doi.org/10.1016/j.isci.2021.103364
43. Pavel Korshunov and Sébastien Marcel. 2020. Deepfake detection: Humans vs. machines. arXiv:2009.03155 [cs.CV]
44. Matthew B Kugler and Carly Pace. 2021. Deepfake privacy: Attitudes and regulation. Northwestern University Law Review 116 (2021), 611–680. https://doi.org/ssrn.3781968
45. Amanda Lawson. 2023. A look at global deepfake regulation approaches. Responsible Artificial Intelligence Institute (April 2023). https://www.responsible.ai/post/a-look-at-global-deepfake-regulation-approaches
46. Min Joo Lee. 2021. Webcam modelling in Korea: Censorship, pornography, and eroticism. Porn Studies 8, 4 (June 2021), 485–498. https://doi.org/10.1080/23268743.2021.1901602
47. Minghui Li and Yan Wan. 2023. Norms or fun? The influence of ethical concerns and perceived enjoyment on the regulation of deepfake information. Internet Research (2023).
48. Thomas Lumley. 2023. survey: Analysis of Complex Survey Samples. R package version 4.2.
49. Sophie Maddocks. 2020. “A deepfake porn plot intended to silence me”: Exploring continuities between pornographic and “political” deep fakes. Porn Studies 7, 4 (2020), 415–423. https://doi.org/10.1080/23268743.2020.1757499
50. Wookjae Maeng and Joonhwan Lee. 2022. Designing and evaluating a chatbot for survivors of image-based sexual abuse. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–21.
51. Kimberly T Mai, Sergi D Bray, Toby Davies, and Lewis D Griffin. 2023. Warning: Humans cannot reliably detect speech deepfakes. arXiv preprint arXiv:2301.07829 (2023).
52. Karolina Mania. 2020. The legal implications and remedies concerning revenge porn and fake porn: A common law perspective. Sexuality & Culture 24, 6 (2020), 2079–2097.
53. Momina Masood, Mariam Nawaz, Khalid Mahmood Malik, Ali Javed, Aun Irtaza, and Hafiz Malik. 2023. Deepfakes generation and detection: State-of-the-art, open challenges, countermeasures, and way forward. Applied Intelligence 53, 4 (2023), 3974–4026.
54. Clare McGlynn, Erika Rackley, and Ruth Houghton. 2017. Beyond “revenge porn”: The continuum of image-based sexual abuse. Feminist Legal Studies 25 (2017), 25–46. https://doi.org/10.1007/s10691-017-9343-2
55. Joseph D. Morelle. 2023. Preventing Deepfakes of Intimate Images Act. https://www.congress.gov/bill/117th-congress/house-bill/9631/text
56. Colette Mortreux, Karen Kellard, Nicola Henry, and Asher Flynn. 2019. Understanding the attitudes and motivations of adults who engage in image-based abuse. (2019). https://www.esafety.gov.au/sites/default/files/2019-10/Research_Report_IBA_Perp_Motivations.pdf
57. Aakash Varma Nadimpalli and Ajita Rattani. 2022. GBDF: Gender balanced deepfake dataset towards fair deepfake detection. arXiv preprint arXiv:2207.10246 (2022).
58. Sophie J Nightingale and Hany Farid. 2022. AI-synthesized faces are indistinguishable from real faces and more trustworthy. Proceedings of the National Academy of Sciences 119, 8 (2022), e2120481119.
59. Office of the eSafety Commissioner Australia. 2023. Regulatory Schemes. https://www.esafety.gov.au/about-us/who-we-are/regulatory-schemes#image-based-abuse-scheme
60. Carl Öhman. 2020. Introducing the pervert's dilemma: A contribution to the critique of deepfake pornography. Ethics and Information Technology 22, 2 (2020), 133–140. https://doi.org/10.1007/s10676-019-09522-1
61. Justin W Patchin and Sameer Hinduja. 2020. Sextortion among adolescents: Results from a national survey of US youth. Sexual Abuse 32, 1 (2020), 30–54. https://doi.org/10.1177/1079063218800469
62. Anastasia Powell and Nicola Henry. 2019. Technology-facilitated sexual violence victimization: Results from an online survey of Australian adults. Journal of Interpersonal Violence 34, 17 (2019), 3637–3665. https://doi.org/10.1177/0886260516672
63. Anastasia Powell, Adrian Scott, Asher Flynn, and Nicola Henry. 2020. Image-based sexual abuse: An international study of victims and perpetrators – A summary report. (2020). https://research.monash.edu/en/publications/image-based-sexual-abuse-an-international-study-of-victims-and-pe
64. Jiameng Pu, Neal Mangaokar, Lauren Kelly, Parantapa Bhattacharya, Kavya Sundaram, Mobin Javed, Bolun Wang, and Bimal Viswanath. 2021. Deepfake videos in the wild: Analysis and detection. In Proceedings of the Web Conference 2021 (Ljubljana, Slovenia) (WWW '21). Association for Computing Machinery, New York, NY, USA, 981–992. https://doi.org/10.1145/3442381.3449978
65. R Core Team. 2023. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/
66. Md Shohel Rana and Andrew H. Sung. 2023. Deepfake detection: A tutorial. In Proceedings of the 9th ACM International Workshop on Security and Privacy Analytics (Charlotte, NC, USA) (IWSPA '23). Association for Computing Machinery, New York, NY, USA, 55–56. https://doi.org/10.1145/3579987.3586562
67. Regina Rini and Leah Cohen. 2022. Deepfakes, deep harms. Journal of Ethics & Social Philosophy 22, 2 (2022). https://doi.org/10.26556/jesp.v22i2.1628
68. Yanet Ruvalcaba and Asia A Eaton. 2020. Nonconsensual pornography among US adults: A sexual scripts framework on victimization, perpetration, and health correlates for women and men. Psychology of Violence 10, 1 (2020), 68–78. https://doi.org/10.1037/vio0000233
69. Georgina Ryan-White. 2022. Cyberflashing and deepfake pornography. (2022). http://www.niassembly.gov.uk/globalassets/documents/raise/publications/2017-2022/2022/justice/0122.pdf
70. Paul Sandle. 2021. UK's YouGov says demand from Silicon Valley clients holding up. Reuters (March 2021). https://www.reuters.com/business/uks-yougov-says-demand-silicon-valley-clients-holding-up-2023-03-21/
71. Sarita Schoenebeck, Amna Batool, Giang Do, Sylvia Darling, Gabriel Grill, Daricia Wilkinson, Mehtab Khan, Kentaro Toyama, and Louise Ashwell. 2023. Online harassment in majority contexts: Examining harms and remedies across countries. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–16.
72. Farhana Shahid, Srujana Kamath, Annie Sidotam, Vivian Jiang, Alexa Batino, and Aditya Vashistha. 2022. ”It matches my worldview”: Examining perceptions and attitudes around fake videos. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–15.
73. Isnaini Imroatus Solichah, Faizin Sulistio, and Milda Istiqomah. 2023. Protection of victims of deep fake pornography in a legal perspective in Indonesia. International Journal of Multicultural and Multireligious Understanding 10, 1 (2023), 383–390.
74. Laura Stroebel, Mark Llewellyn, Tricia Hartley, Tsui Shan Ip, and Mohiuddin Ahmed. 2023. A systematic literature review on the effectiveness of deepfake detection techniques. Journal of Cyber Security Technology 7, 2 (2023), 83–113. https://doi.org/10.1080/23742917.2023.2192888
75. Lisa Sugiura and April Smith. 2020. Victim blaming, responsibilization and resilience in online sexual abuse and harassment. Victimology: Research, Policy and Activism (2020), 45–79.
76. Gail M Sullivan and Anthony R Artino Jr. 2013. Analyzing and interpreting data from Likert-type scales. Journal of Graduate Medical Education 5, 4 (2013), 541–542. https://doi.org/JGME-5-4-18
77. Nyein Nyein Thaw, Thin July, Aye Nu Wai, Dion Hoe-Lian Goh, and Alton YK Chua. 2020. Is it real? A study on detecting deepfake videos. Proceedings of the Association for Information Science and Technology 57, 1 (2020), e366.
78. Loc Trinh and Yan Liu. 2021. An examination of fairness of AI models for deepfake detection. arXiv:2105.00558 [cs.CV]
79. Kate Walker and Emma Sleath. 2017. A systematic review of the current knowledge regarding revenge pornography and non-consensual sharing of sexually explicit media. Aggression and Violent Behavior 36 (2017), 9–24. https://doi.org/10.1016/j.avb.2017.06.010
80. Moncarol Y Wang. 2022. Don't believe your eyes: Fighting deepfaked nonconsensual pornography with tort law. University of Chicago Legal Forum (2022), 415–445.
81. Mika Westerlund. 2019. The emergence of deepfake technology: A review. Technology Innovation Management Review 9, 11 (2019), 40–53. https://doi.org/10.22215/timreview/1282
82. David Gray Widder, Dawn Nafus, Laura Dabbish, and James Herbsleb. 2022. Limits and possibilities for “Ethical AI” in open source: A study of deepfakes. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 2035–2046.
83. Deressa Wodajo and Solomon Atnafu. 2021. Deepfake video detection using convolutional vision transformer. arXiv preprint arXiv:2102.11126 (2021).
84. Ying Xu, Philipp Terhörst, Kiran Raja, and Marius Pedersen. 2022. A comprehensive analysis of AI biases in deepfake detection with massively annotated databases. arXiv preprint arXiv:2208.05845 (2022).
