DOI: 10.1145/3613904.3642461
research-article
Open Access

Generative AI in Creative Practice: ML-Artist Folk Theories of T2I Use, Harm, and Harm-Reduction

Published: 11 May 2024

Abstract

Understanding how communities experience algorithms is necessary to mitigate potential harmful impacts. This paper presents folk theories of text-to-image (T2I) models to enrich understanding of how artist communities experience creative machine learning systems. This research draws on data collected from a workshop with 15 artists from 10 countries who incorporate T2I models in their creative practice. Through reflexive thematic analysis of workshop data, we highlight artist folk theories of T2I use, harm, and harm reduction. Folk theories of use envision T2I models as an artistic medium, a mundane tool, and locate true creativity as rising above model affordances. Theories of harm articulate T2I models as harmed by engineering efforts to eliminate glitches and product policy efforts to limit functionality. Theories of harm-reduction orient towards protecting T2I models for creative practice through transparency and distributed governance. We examine how these theories relate, and conclude by discussing how folk theorization informs responsible AI efforts.


1 INTRODUCTION

Contemporary art scenes have seen new movements that incorporate the practice of machine learning (ML) [3, 53, 113], permeating art worlds related to producing [71], presenting [55], promoting [35, 96], and buying art [120]. Alongside expansion of ML-art worlds is public discourse related to model training data [34, 80], global considerations concerning copyright [41], compensation [46], automation of creative knowledge work [119], and job displacement within creative economies [25, 97]. Underlying these concerns are critiques of how creative ML, including text-to-image (T2I) tools [1, 77, 82], will function as technologies of creative and economic control by chilling cultural production and homogenizing art through style mimicry [56]. Human-computer interaction (HCI) research recognizes the need for reflexive community engagement to proactively bridge gaps between developer expectations and how communities actually use and experience technologies [22, 36]. Such a human-centered approach to how ML-artists experience T2I models can enrich understanding of the systems of thought underpinning T2I use in creative practice and locate intervention points to inform HCI research and responsible AI practice.

One approach to understanding ML-artists’ systems of thought is through the lens of algorithmic folk theories: the theories people hold to explain, interpret, and intervene in sociotechnical systems [45, p. 3]. Differently situated communities hold folk theories dynamically shaped by their experiences, goals in a social context, and prior use of an algorithmic system [28, 29]. HCI research has employed folk theories as an analytical lens to understand communities’ situated knowledge about technologies [28, 29, 38, 45, 79], perceived characteristics (e.g., transparency, accuracy), and resulting sociotechnical harms [62, 110, 128], including for novel or emerging technologies [72]. As such, attention to ML-artists’ folk theorization — as one distinct user community — offers researchers and practitioners rich insight to understand how they situate creative ML technologies in their lives in alignment with their algorithmic awareness, motivations, and imaginaries. While prior work examines the politics and practices of ML-art communities (e.g., [14, 39, 40, 51, 60]), a dearth of work engages ML-artists on sociotechnical harms from T2I tools and their rationales towards reducing them.

In this paper, we describe findings from a qualitative study with ML-artists, uncovering their folk theorization related to T2I-mediated creativity. We convened 15 ML-artists from France, Hong Kong, India, Kenya, Poland, Sweden, Switzerland, the U.K., the U.S., and Zimbabwe. We engaged them in three interactive workshops to elicit conversation around their situated expertise and experience with T2I-mediated creativity. Our research questions include:

RQ1:

How do artists integrate T2I models in their creative practice? How do they frame creativity with respect to the use of T2I models?

RQ2:

What are ML-artists’ perspectives toward potential types and drivers of harm from T2I models?

RQ3:

What are their perspectives toward harm reduction efforts when it comes to the development of T2I models?

We draw on a reflexive thematic analysis of artists’ verbal and written responses to workshop activities to make the following research contributions:

An empirical account of how cross-cultural ML-artists understand T2I-mediated creativity and its social impacts. We focus specifically on how they articulate potential harms emanating from T2I models and consider them an artistic medium harmed through engineering efforts to polish and safeguard models.

An analysis of folk theorization by cross-cultural artists who employ T2I models. We identified three high-level sets of folk theories related to using T2I models, their potential harms, and harm reduction strategies. We illuminate how these overlapping folk theories inform each other, highlighting the value of exploring folk theories across these three dimensions.

A discussion of how ML-artist folk theorization intersects with popular T2I discourses and how employing folk theory as an analytical lens illuminates how different publics frame sociotechnical problems and solutions, which can strengthen responsible AI research and policy.

Our analysis of ML-artist folk theorization reveals a wide range of beliefs and normative expectations. Related to using T2I models in creative practice, we find ML-artists articulate “creativity” as innovative use that exceeds basic model affordances. Depending on the context, artists frame T2I models as a medium to incorporate into their personal art practice or as a collaboration tool for client-based work. Recognizing the contextual and non-deterministic nature of harm of T2I models in creative practice, we find ML-artists theorize T2I as something harmed by engineering efforts to eliminate glitches and product policy efforts to limit functionality. This folk theory is directly informed by their beliefs that T2I models are an important medium and tool for creative practice, necessitating their protection. It also shapes their beliefs about harm-reduction, which orient towards protecting T2I models for creative practice through transparency and distributed governance.

After introducing these theories, we discuss the need to examine folk theorization across technological use, harm, and harm reduction dimensions. We found looking across these dimensions illuminates how this community’s desired uses of sociotechnical systems are entangled with user knowledge about harm and harm reduction. These folk theories can inform responsible AI development by calling attention to fundamental questions of harm and the frictions between the values encoded in algorithmic systems and those held by communities.


2 RELATED WORK

2.1 The Practice of ML in the Arts: Situated Perceptions, Politics, and Use of Technology

The practice of ML in the arts is part of a longer history of art-and-technology dating to the era of mainframe computing when the first “purely aesthetic image [was] made on a computer” [59, p. 39]. The earliest connections between ML and art date to the 1970s when Harold Cohen developed the rule-based ML-art algorithm AARON [19]. While artists continued to create and exhibit conventional ML artwork that provoked questions about relationships between society and technology (e.g., [52, 122]), the development of neural networks — and specifically Generative Adversarial Networks (GANs) in 2014 — marks a pivotal moment in ML-mediated art. GANs generate new content (e.g., image, text, audio) based on the data they were trained on [44]. As developers open-sourced GANs, artists shared resources and techniques about ML-mediated creativity [3]. Making sense of how different artist communities experience ML models thus requires attention to their socially situated context of use [32, 121]. While ML-art communities cohere in their engagement with computation, the varied politics of art worlds and artist motivations “fractures that technocultural material into millions of heterogeneous interests and agendas, specific investigations, aesthetics, approaches, and projects” [124, p. 5]. Consequently, it is important to examine the situated knowledge and beliefs of different artist communities.
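To make the adversarial mechanism concrete, the sketch below shows the setup in miniature; it is an illustrative, untrained toy (the PyTorch framing and layer sizes are our assumptions, not any artist's pipeline or a specific published model).

```python
# Minimal GAN sketch (assumed PyTorch; an illustrative toy, not any specific
# artist's or product's model). A generator maps random noise to synthetic
# images; a discriminator learns to tell them apart from training data.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28            # arbitrary toy dimensions

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),      # emits a flattened synthetic image
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),         # estimated probability input is real
)

z = torch.randn(16, latent_dim)              # a batch of random noise vectors
fake_images = generator(z)                   # after adversarial training, these
print(discriminator(fake_images).shape)      # would resemble the training data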

Far from cohesive, various art-and-technology communities hold different understandings of the relationship between technology and art, from a depoliticized embrace of technology to a critical stance on the social impacts of technology on society [81, 84, 114, 121]. Prior HCI scholarship examines the techniques [71, 104] and politics [15] of critical ML-art communities in terms of how artists perceive and materially engage with models. Artists who employ ML in their creative practice often use appropriation and experimentation in ways they perceive as countercultural to dominant ML engineering values [32], such as accuracy, productivity, and performance [15, 98].

Moreover, ML-artists characterize ML models and data as creative material to be reworked through their practice [104]. As Caramiaux and Alaoui [15, p. 9] emphasize, artists “wor[k] with AI…through a concrete experience of the algorithm’s behaviors rather than a theoretical understanding of its capabilities.” These studies characterize this orientation to creative ML as a “craft approach,” where artists approach code and ML models as material to be (re)shaped and (re)formed in creative practice [71, 104]. This “craft approach” encompasses manipulating training datasets, altering algorithms, and developing new AI models. Here, the practice of ML in the arts is characterized by iterative engagement with models as a process and material [99], embrace of the unpredictable nature of models characterized by errors and glitches, and reframing ML as an instrument for creation [15]. In short, ML-artists’ engagement with ML reflects their situated use and appropriation of these algorithms, raising questions about how off-the-shelf creative ML, such as T2I models, shifts the conversation.

2.2 T2I Models, Sociotechnical Harms, and Harm-Reduction in Responsible AI Practice

Text-to-image (T2I) models allow users to create photorealistic images from open-ended text prompts [89, 91]. The release of beta T2I products in mid-2022, such as Stable Diffusion [1], Midjourney [77], and DALL-E-2 [82], made technologies previously accessible only to researchers and artists in limited capacities available to wider publics. Without learning to manipulate code, illustrate, paint, or photograph, people can generate high-quality, complex images comparable to those of an experienced artist [75, 94]. This catalyzed discussion among artist communities about T2I-mediated creativity, including discourses of how T2I might expand [47, 85] or erode creative pursuits [64, 74]. Whereas some artist communities embrace T2I tools (e.g., the “promptism” movement [16, 57]), others have banned their use in art forums, competitions, and conventions [4, 33, 86]. A focus of public discussion among artist communities and researchers is the harm to creative practice (e.g., [20, 66, 97, 126]). These conversations include questions of consent [7] and the use of artists’ work in training data [34, 109], global legal deliberations regarding copyright [86, 93], compensation [78], and macro-economic harms to creative labor, including job displacement [18]. Alongside these discussions are increasing social and regulatory expectations that ML models and products will be developed “safely and responsibly” [21, 54], which requires understanding of contextual use.
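As a concrete illustration of this accessibility, the sketch below generates an image from an open-ended prompt using the open-source Stable Diffusion checkpoint via the Hugging Face diffusers library; the checkpoint ID and prompt are illustrative choices on our part, not drawn from the paper.

```python
# Hedged sketch: generating an image from an open-ended text prompt with the
# open-source Stable Diffusion model via Hugging Face diffusers. The
# checkpoint ID and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# No code manipulation, illustration, painting, or photography skills required:
image = pipe("a photorealistic portrait of a glassblower at work").images[0]
image.save("portrait.png")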

As a field, responsible AI is concerned with developing strategies to reduce sociotechnical harms from algorithmic systems [9, 87, 90, 102, 106], that is, the “adverse lived experiences resulting from a system’s deployment and operation in the world,” including computational and contextual harms [105, p. 723]. However, the field has been critiqued for its emphasis on computational harms and technical problems in model pipelines influencing the representational politics of generated content [10, 63, 105]. Much responsible AI research focuses on examining the ability of T2I models to reproduce demeaning stereotypes that reinforce unjust social hierarchies along intersecting axes of race [6], disability [42], and geopolitical cultures [5, 88, 118]. Common strategies to address computational harms focus on model-level content moderation, such as implementing safety classifiers that restrict model inputs or outputs, blocklists, and training data remediation [48]. While addressing the representational politics of T2I models is an important area of research, what constitutes harm and effective harm reduction interventions are contextual and require an understanding of their different situated uses.
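For illustration, a minimal version of one such model-level strategy, a prompt blocklist, might look like the sketch below; production systems typically layer trained safety classifiers on both inputs and outputs, and the terms and function names here are hypothetical.

```python
# Minimal sketch of a model-level moderation guardrail: a blocklist applied
# to prompts before generation. These placeholder terms and helper names are
# hypothetical, not any product's actual policy or API.
BLOCKLIST = {"blocked_term_a", "blocked_term_b"}   # hypothetical policy terms

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts containing any blocklisted term."""
    tokens = set(prompt.lower().split())
    return BLOCKLIST.isdisjoint(tokens)

def guarded_generate(prompt: str, generate_fn):
    """Call the underlying T2I model only if the prompt passes the check."""
    if not is_prompt_allowed(prompt):
        raise ValueError("Prompt rejected by content policy.")
    return generate_fn(prompt)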

When questions of sociotechnical harm and harm reduction are discussed, responsible AI scholars recognize the importance of engaging communities to surface meaningful intervention touchpoints [8, 61]. Recent responsible AI approaches to harm identification and reduction for T2I models advocate bringing in stakeholder voices through qualitative engagements that draw attention to the ML product lifecycle [92] and center communities as experts in articulating how technologies replicate systems of social power [88] that are often challenging for developers to see due to “privilege hazards” [31]. However, there is room to strengthen community engagement in responsible AI research through analytical lenses, such as folk theories [28, 108], that enable a rich understanding of users’ situated knowledge.

2.3 Folk Theories and User Knowledge about AI

Algorithmic folk theories are the beliefs that users — with different levels of expertise and experience — develop to explain the outcomes, effects, or consequences of algorithmic systems [29, 45]. While HCI employs various definitions and methods to surface and analyze folk theories, the field is aligned on the utility of folk theories as an analytical lens for understanding user beliefs, or knowledge, about algorithmic systems [30].1 As Willett Kempton [65, p. 75] describes, “the word ‘folk’ signifies both that these theories are shared by a social group and that they are acquired from everyday experience or social interaction.” As such, folk theories are a malleable way users of algorithmic systems make sense of a system in relation to sociocultural dynamics [28, 62, 100].

Importantly, communities may hold “complex, multi-part folk theories” [28, 30] about a given algorithmic system that shape how they interact with it [108] and understand developer practices [62], such as data collection [72]. Folk theories also (de)motivate the use of technology [62] and shape how communities employ [117] or resist [29] technology affordances. The folk theory analytical lens thus enables useful insights to understand how people experience algorithmic technologies in their lives and social relations [2, 108, 129], which can inform responsible AI practice.

Folk theorization influences how people form different relationships to technologies shaped by users’ algorithmic awareness, motivations, and imaginaries [26, 72] and enact agency considering their working knowledge of a system (e.g., [23, 28, 29, 67]). Put differently, they illuminate one dimension of algorithm-culture relationships that “situate people’s systems of thought and practices within the specific cultural conditions in which algorithmic use takes place” [107, p. 61]. Depending on the context, the social meaning and power users ascribe to algorithms may vary, including theorizing “the work” of algorithms as “confining, practical, reductive, intangible, and exploitative” [129, p. 807]. In their study of Spotify, for instance, Siles and colleagues [108, p. 11] find different folk “theories provide users resources to carry out strategies of action through which they enact different modalities of power and resistance.” In this way, folk theories illuminate user knowledge and how this is entangled with broader social discourses and algorithmic cultures [101].

In sum, algorithmic folk theory literature underscores how attention to user knowledge enriches understanding of how differently situated communities experience algorithms, which can illuminate both problems in algorithmic design and presentation [29] and from what standpoints they formulate knowledge [26]. Responsible AI literature underscores the importance of engaging with and understanding community viewpoints to surface grounded intervention points [88]. However, the field still grapples with questions about using ML in creative practice, potential harms, and harm reduction. To date, no studies examine ML-artist folk theorization related to these dimensions. Focusing on ML-artists who use T2I in this study, we sought to understand their beliefs toward using T2I models in their creative practice, harms from use, and corresponding harm reduction strategies.


3 METHODOLOGY

Our work builds on HCI literature examining the politics and practices of ML-artists [14, 39, 51, 60, 127] to uncover their folk theories about the use of T2I models, their potential harms, and corresponding harm reduction efforts.

3.1 Workshop as Method

We conducted three semi-structured workshops as part of a two-day engagement with 15 artists who incorporate T2I models in their creative practice. We engaged artists as a “community of practice” [123] with shared social and material interests [24] in ML-mediated art. We chose a participatory workshop methodology (e.g., [49, 70, 95]) to engage artists and elicit conversation around their situated knowledge and experience with T2I models.

Participatory workshops offer a method and instrument to animate participants’ social alignments [95], fostering “opportunity for knowledge exchange between researcher and participant” [49, p. 3]. In this study, workshop as method allowed for lively discussion and interaction among participants in a collaborative environment. These were not co-design workshops for developing a prototype, feature, or design recommendations [112]; we characterized them as workshops to participants given their multimodal nature and collaborative activities to formulate sociotechnical harms and reduction strategies.

3.2 Participant Recruitment

To recruit participants, we used a research partner, Google Arts and Culture, which is a non-commercial institute that works with global cultural organizations and artists.2 We also employed snowball sampling [83], as interested participants shared study information with others in their networks. To participate in the study, participants needed to (1) identify as an artist, (2) be familiar with T2I models, (3) have incorporated T2I models in their creative practice, and (4) be 18 years old or older. We did not have specific quotas, but aimed to recruit a geographically diverse group of artists, including those with and without formal training (i.e., self-taught artists), and thus did not use formal training as an exclusion criterion.3

Recruitment began in early October 2022. Of the 26 candidates contacted to participate, 15 were accepted. The participants included six women and nine men from France, Hong Kong, India, Kenya, Poland, Sweden, Switzerland, the U.K., the U.S., and Zimbabwe. Participants completed an informed consent form before participating in the workshop. We offer high-level information about the artists’ self-described background to prime our Findings (Section 4); to preserve their privacy, we do not detail information about each artist.

Table 1: Overview of Workshop Discussion Guide.

Session | Topic | Main Prompts
Workshop 1 | Creative Process | Describe your work as an artist and what audiences you prioritize. What top values do you exercise and care about as an artist?
Workshop 1 | T2I in Creative Practice | How do you currently use T2I models in your creation process? Draw your process of using T2I models in your art creation practice. How does use of T2I models amplify, hinder, or complicate your values?
Workshop 2 | Harm & Art | What potential harms might arise from art not incorporating ML? What potential harms might arise from art incorporating ML? Who is impacted? What do you consider the primary source of these harms (e.g., technologies, structures, policies, processes, practices)?
Workshop 3 | Harm Reduction | Which harms are most important to minimize or eliminate? Why these harms? What are potential ways of mitigating these harms?
Workshop 3 | Responsibility | What communities need to be involved to help mitigate these harms?

3.3 Workshop Structure and Activities

The artists were invited to participate in a two-day series of structured, participant-directed educational talks and workshops comprising open-ended questions and activities (Table 1). Workshop activities were conducted in English; all participants were fluent. The event was held virtually to accommodate global participation and recorded with consent. The first day began with group introductions, inviting them to share information about their creative practice and where they work. This was followed by educational presentations to ground participants in a common language to aid discussion, with extensive time to discuss the impacts of these models on creative practice.

On the second day, we held the workshops, lasting 75-90 minutes, with 15-minute breaks between sessions. Workshop activities were chosen based on prior research workshops [95] and authors’ experience conducting similarly structured research with communities of practice, in which creating space for individual activities and group discussion cultivated rich insights. Five people, including the authors, facilitated the sessions.

Workshop 1 began with a teaching presentation offering an introductory description of T2I architecture to ground the conversation (10 minutes), followed by 65 minutes of semi-structured activities on (1) artists’ creative process, (2) the values they exercise in that process, and (3) how they incorporate T2I into their creative process. We prompted these with open-ended questions and invited participants to document their responses individually, using freeform text or drawings on a shared online whiteboard (Jamboard) used throughout the sessions. We then engaged them in group discussions about their use of T2I models and how T2I is (dis)similar to other mediums.

Workshop 2 began with a teaching presentation on sociotechnical harms, such as ML fairness, interpersonal, and societal harms (15 minutes), followed by 75 minutes of small group discussion (n = 3-4) where they brainstormed (1) who or what could be harmed by using T2I models in creative practice and (2) the source of harms from T2I models. We randomly divided participants into five breakout groups with a facilitator who captured artists’ reflections into a notes document (30 minutes). We then reconvened, where they discussed breakout group insights.

Workshop 3 focused on (1) potential strategies to mitigate identified harms and (2) a discussion of who needs to be involved in mitigating harms effectively. We prompted discussion through open-ended questions that participants individually documented on the Jamboard, followed by group discussion. We sent a follow-up survey to collect demographic information.

3.4 Data Analysis

Three sets of data were collected from the workshops: the transcribed video recording, facilitator notes, and workshop activities captured on the online whiteboard. We analyzed data in parallel using Reflexive Thematic Analysis (RTA) [11, 12]. Our use of RTA was informed by constructivist approaches foregrounding how people construct situated knowledge about technologies [50, 68] and our RQs outlined in the Introduction (Section 1). RTA’s theoretical flexibility opened the possibility for inductive analysis of artist practices and viewpoints that could be informed by frameworks concerned with understanding user knowledge and reasoning about technology interactions, expectations, and practices, which in our case was folk theories. Thus, our analysis focused on ML-artists’ beliefs about T2I tools and responsibility guardrails.

Two authors conducted data analysis (January–May 2023), independently coding the three data sources and developing themes. Data sources were iteratively read by the first and second authors to become deeply familiar with the data before coding. Initial codes, which are “analytically interesting idea[s], concept[s] or meaning[s] associated with particular segments of data” [13, p. 53], were derived free-form, using qualitative analysis software (NVivo 12). Themes, “pattern[s] of shared meaning organized around a central concept” [13, p. 77], were recursively developed from codes. The two authors held four rounds of iterative discussion, moving from open codes to thematic discussions and resolving disagreements.

As we developed our analysis, we held folk theories as a potential overarching theme or structuring device, as we interpreted data reflecting participants’ perceptions about the use and relationships of T2I models in creative practice. In the second stage of analysis, the authors collaboratively developed the ten folk theories through interpretation and using coded extracts to revise and refine themes, looking for participants’ explanations about using T2I in creative practice, attitudes about harm, and what harm reduction practices or interventions are important to them. A number of rough thematic maps were used to develop the folk theories, and the analysis was iterative. In the final stages, we named the folk theories to capture their “essence and analytical direction” as part of the RTA process [11, p. 112]. Lastly, the two authors confirmed the findings with the third author.

Table 2: Overview of ML-Artist T2I Folk Theories.

Dimension | ML-Artist Folk Theories
Using T2I in creative practice | (1) T2I models are an artistic medium, with specific information-rich properties. (2) T2I models are a mundane tool for prototyping and collaborator communication. (3) T2I models expand access to and forms of creative expression. (4) True creativity involves rising above basic T2I affordances.
Harms of using T2I in creative practice | (1) Efforts to eliminate failures, glitches, and bugs harm T2I models as artistic medium. (2) Limiting the functionality and release of T2I models harms creative practice. (3) Harms from use are not deterministic but contextual, distributed unequally across contexts.
Reducing harms of using T2I in creative practice | (1) Transparency enables creative practice while informing others about appropriate T2I uses. (2) Expanding artist control over model parameters protects T2I as medium. (3) Harm reduction responsibility should be distributed among artists, developers, and moderators.

3.5 Author Positionality

Our team comprises researchers with diverse disciplinary expertise, including responsible AI, machine learning, computer engineering, and Science and Technology Studies. In addition, two authors have experience working in and supporting visual arts communities: one author previously worked in a design studio in the U.S. South; another author has organized workshops on ML-art in the computer vision community. All authors — and the two additional facilitators in the breakout groups — currently work in institutions in the Global North, and have experience shaping ML pipelines from responsibility and equity-oriented standpoints.

The research team relied on our scholarly and professional experiences during the analysis, which was especially generative in discussing ML pipelines, responsible AI practice, and creative practice. While drawing on these experiences, we reflexively challenged our assumptions and interpretations throughout the analysis. In particular, we conducted this analysis amidst the rising popularity and associated critique of T2I tools in the public sphere. We reflected on both sides of what is an oft-polarized debate about creative ML to understand to what extent these discourses were present in the data.

3.6 Limitations

Although this study offers critical insights into ML-artist folk theorization, it has limitations. Our study focuses on the perspectives of artists who have employed or currently employ T2I models in their creative practice and thus reflects the perspectives of those who likely hold more favorable views toward T2I models. As such, this study does not capture the folk theorization of artists critical towards using T2I models, artists who do not have access to these models, or artists whose practice does not include digital art. Although our recruitment through a research partner enabled us to convene global artists who live and work in different cultural contexts, it limited the initial pool of artists to those with direct connections to the organization or those who have relationships with someone with a direct connection.

Moreover, our study design focused on understanding the perspectives and beliefs of one particular artist community and did not enable analysis across differently situated artist communities, such as (1) those who are unfamiliar with T2I models, (2) familiar but have not employed T2I models, and (3) those who have incorporated T2I models in their practice. Exploring how folk theories align or differ across differently situated communities and artistic intentions is a fruitful area for future research. Such work could potentially inform design or feature recommendations to develop T2I tools that better serve a wider range of artist communities.


4 FINDINGS

Our study examines how ML-artists experience T2I models in their creative practice and situate them in the broader field of creativity. We specifically focus on understanding ML-artist folk theorization related to (1) how they frame creativity with respect to the use of T2I models, (2) the types and drivers of harms from T2I models, and (3) perspectives toward harm reduction efforts (see Table 2). The ways the ML-artists in our study consider T2I models are shaped by the values and motivations they bring to their creative practice.

About the Artists. The artists in our study define themselves as working at the intersection of art and ML, with creative practices interrogating the role of technology in society. Their art explores questions concerning the social impacts of technology, how technology reshapes social relationships, and how emerging technology transforms historic art mediums. Their artwork takes many forms, including sculpture, film, performance art, and interactive installations. In addition to ML, they employ a range of other digital mediums, including video, photography, sound, other algorithms or code, and hand-crafted or hand-written elements.

These artists occupy multiple roles in art worlds, such as curation, grant funding, organizing community art organizations, and working in creative industries. Each of these art worlds is shaped by its own social power dynamics, and our participants discussed how they bring their values to these spaces, particularly values of creativity, joy/play, diversity, humor, and innovation. Artists’ folk theorization of T2I models reflects these values and positionality in their creative practice, which we describe next.

4.1 Folk Theories of Use: Juxtaposing Multiple Uses of T2I in Creative Practice

ML-artists engaged in a range of folk theorization focused on two dimensions of “use” (see Table 3). The first dimension concerns the divergent ways T2I models are employed in creative practice as a (1) creative medium with specific properties that can be crafted and molded and as a (2) mundane tool for executing and facilitating aspects of their creative projects. The second dimension contextualizes multiple meanings of creativity. Here, they articulate distinctive theories clarifying that while T2I tools (3) increase access to creative modalities, (4) true “creativity” requires rising above basic model affordances. These overlapping folk theories underscore how the ways ML-artists make sense of T2I models are contextualized by the motivations, perspectives, and goals of the specific user.

4.1.1 Use Folk Theory 1: T2I Models as Artistic Medium.

The first folk theory of use is: T2I models are an artistic medium with information-rich properties. This theory reflects how ML-artists perceive T2I models as embodying specific material characteristics, including errors, glitches, distortions, and imagined universal or normative representations that they manipulate in their practice.

They drew direct comparisons between T2I models and established mediums, such as sculpture and photography, noting all mediums must be explored, understood, and questioned to make sense of their properties. The ML-artists emphasized how “the unexpected outputs are great...that is what makes it interesting” (Breakout Group 1) and described the ability to explore “unintended,” “weird,” and “unanticipated” outputs of T2I models as what makes them a “creatively interesting” medium (P15, U.K.). They emphasized how they interpret and study model failures, glitches, and bugs, framing these as a source of creative inspiration with expressive value. For example, P5, from the U.K., described:

“I’ve been creating a video series [...] looking at ‘journeys’ as a destination and the unstable nature of our world or the lack of permanence. [I used] a video dataset and [applied] some GAN [...] Often the resulting data is super weird, blurry, smudgy [and I think a] more authentic representation of our physical experience than clear film. When you watch things on film it’s very crisp. It’s clear. It’s very defined. [But] when you experience something, often it is not [clear]. It’s much more of a fluid understanding of life.”

In this example, although the GAN model fails to generate high-resolution videos, this technical failure is perceived as better delivering the artist’s message about “the unstable nature of the world.”

Table 3: Four ML-artist folk theories about “using T2I models” in creative practice.

Dimension of Use | Folk Theory | Illustrative Quote
Employing T2I in creative practice | T2I models are an artistic medium | “I approach it as what’s the concept I want to create and what medium or way of producing is most suitable for what I want to talk about.” (P5, U.K.)
Employing T2I in creative practice | T2I models are a mundane tool | “It’s interesting — for T2I models to act [as a means] to avoid the chance of lost in translation scenarios.” (P8, India)
Implications of T2I on creativity | T2I models expand access to and forms of creative expression | “I’m really looking into ways in which these tools can best be used to recreate traditional storytelling experiences.” (P11, Kenya)
Implications of T2I on creativity | True creativity involves rising above basic T2I model affordances | “What are you actually bringing that’s [a] more inventive take on the medium than just the output.” (P10, Zimbabwe)

As an artistic medium, ML-artists recognize T2I model outputs as reflecting imagined “universal concepts” (P8, India) and encoded representations of hegemonic social norms that are a “mirror to the world” (P3, France). They critically interpret this property of T2I models, orienting their discoveries against extant social inequalities, taking on a social critic role through their creative practice. P10, from Zimbabwe, described an ongoing project that “seeks to predict media and its depictions of certain groups and how this feeds into the biases of image generation models.” He reflected:

“We initially work[ed] with DALL-E-2 [and] how the architects of these platforms could be more critically towards the data sources, especially from a media standpoint, and how these further reinforce certain biases.”

Similarly, P5 spoke to the “over-representation of certain people [and] under-representation of other people” in generated imagery and characterized T2I models as “literally” a “mirror towards society” that reflects extant power dynamics. They emphasized these constituent properties of the medium could open a conversation about the state of society to instantiate social change. In this folk theory, working with T2I as medium requires deep reflection and understanding of the embodied properties of T2I models when employed in creative practice.

4.1.2 Use Folk Theory 2: T2I Models are a Mundane Tool.

ML-artists also conceptualize T2I models as a mundane creative tool to prototype ideas and communicate more effectively with collaborators. This folk theory concerns using T2I models as a mechanical means of facilitating creative processes without employing T2I outputs in the final piece. Participants expressed how employing T2I models in this way facilitates or eases cumbersome aspects of their creative process, especially in early conceptual stages and to “mock-up prototypes” (see Fig 1). For instance, P6, a documentary filmmaker from Poland, described using T2I models to mood board before developing a detailed treatment (an outline of the film structure):

“I personally feel really empowered... to innovate in this space or be able to draft up ideas and concept sketches in my own studio without having to immediately find a collaborator to bring those ideas to life” (see Fig 2).

This folk theory of use imagines T2I models as low-stakes technology where its value is enabling creative tasks.

Figure 1: P11’s (left) and P2’s (right) illustrations of how they use T2I for iterative brainstorming.

Figure 2: P6’s illustration of how they use T2I to moodboard.

Figure 3: P8’s illustration of how they use T2I to align collaborators on concepts.

Another way ML-artists consider T2I models as a mundane creative tool is in using them to facilitate communication among collaborators with different expertise. Here, T2I models “work” to render ideas legible and foster shared meaning. P8, from India, described how T2I enables him to visually prototype and communicate abstract concepts (see Fig 3), elaborating:

“[I’m] creating a VR space [...] with another 3D artist. Sometimes [...] the ideas don’t get communicated very well and we found using Midjourney or DALL-E to give some visual form [to the ideas]…helps a lot because the ideas are quite abstract…and being able to communicate them simply in language or even poorly drawn hand-drawn figures is generally not that effective.”

In this folk theorization, T2I models enable the early stages of the creative process by facilitating iterative idea generation and communication among collaborators.

4.1.3 Use Folk Theory 3: T2I Models Expand Creative Expression.

The third folk theory of use is: T2I models expand access to and forms of creative expression. All artists were excited and motivated by how T2I models expand both what can be created in visualizations and who can create them. Here, they described the technical affordances of T2I models as offering something distinct that transcends what P6 viewed as “the limits of representation using traditional media.” She elaborated on how her art involves creating representations of abstract and subjective human experiences:

“memories, wishes, and personal experiences...things that are sometimes incredibly hard to image because they happened in the past and there is no archival footage or documentary photography to show what that looked like.”

She views T2I models as a modality for novel creative expression. In this folk theory, ML-artists understand T2I models as enabling new forms of aesthetic representation.

This theory is connected to the democratizing creativity discourse that positions AI as an empowering tool enabling broad access to creative pursuits (e.g., [47, 85]). In particular, ML-artists called attention to how social and cultural norms discourage certain forms of creative expression in different regions of the world. Speaking to the issue of political censorship, P9 (U.S.) emphasized T2I models’ double-edged capacity to both expand and restrict “freedom of expression”:

“...we take [freedom of expression] for granted in the U.S. [...] we’re still wrestling with how T2I will enable visualization processes to be available to people who don’t feel comfortable with or practice other forms of creative production. I see that [...] as increasing access to the freedom of expression.”

ML-artists also perceive T2I models as potentially addressing differential access to art education and resourcing, an inequality some felt personally. P10, a self-taught creative from Zimbabwe, emphasized “these tools [T2I models] could bridge gaps in places where there are pronounced institutional voids in tertiary education” and enable “high-output participation in previously inaccessible industries.” This perception of T2I models as expanding access to creative expression was echoed by all, regardless of formal art education.

4.1.4 Use Folk Theory 4: True Creativity Involves Rising Above Basic T2I Affordances.

The last folk theory of use is: true creativity involves rising above basic T2I model affordances. This folk theory locates creativity within the subjective creative process through which an artist expresses their unique sociocultural perspective. P5, from the U.K., described how people can use T2I models “in a way that is quite illustrative of the technology…relying on the technology” and “its basic uses.” However, everyone emphasized an “authentic creative practice” is one that exceeds the basic affordances of the medium employed.

When using T2I models, they emphasized how creativity requires the artist to do more than prompt a model with random words without an underlying message, goal, or process. Rather, they emphasized true creativity requires learning and exerting control over T2I models as a medium, with many emphasizing the “desire to fine-tune or extensively prime a model” (P2, France). P6 (Poland), characterized this as “imprinting,” in which “an artist can ‘imprint’ their own way of seeing onto a model […] ensuring that the output is an authentic expression of [their] artistic vision.” In this way, ML-artists perceive creativity as a practice and course of action through which one innovates in expressive, interpretive, and novel ways by working outside the conventional boundaries of the medium.
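One concrete route to the “imprinting” participants describe is loading artist-trained style embeddings into a public diffusion pipeline, sketched below with the textual-inversion support in Hugging Face diffusers; the checkpoint path and the <my-style> token are hypothetical, and this is one possible reading of the practice rather than any participant’s documented workflow.

```python
# Hedged sketch of "imprinting" a way of seeing onto a model: loading an
# artist-trained textual-inversion embedding into a diffusion pipeline so
# outputs carry the learned style. The embedding path and <my-style> token
# are hypothetical; diffusers also supports fuller fine-tuning approaches.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("path/to/my-style-embedding")  # artist-trained weights
image = pipe("a harbor at dusk in <my-style>").images[0]   # style-conditioned output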

Table 4: Three ML-artist folk theories about the “harms of using T2I models” in creative practice.

Dimension of Harm | Folk Theory | Illustrative Quote
Target of harm | Engineering efforts to eliminate failures, glitches, and bugs harm T2I models as medium | “The sterilization of the technology as well…to fill a certain set of [commercial] needs...we lose the potential excitement there.” (P5, U.K.)
Target of harm | Limiting the functionality and release of T2I models harms creative practice | “Get[ting] direct contact between creative thinking and AI is where really good stuff is going to happen. We [can’t] drown this out [with guardrails].” (P7, U.K.)
Source of harm | Harms from use are not deterministic but contextual and distributed unequally across contexts | “Thinking as a self-taught creative...these tools could bridge gaps in places [with] pronounced institutional voids in tertiary education and...participation in previously inaccessible industries. But...[using] AI-generated images in journalism...risks reinforcing media biases in representation.” (P10, Zimbabwe)

This folk theory distinguishes “creativity” from its meaning within the “democratizing creativity” discourse, which asserts T2I models lower the barriers to executing creative work by making it easier, faster, and more accessible to produce aesthetic outputs. P10, from Zimbabwe, explained:

“[T2I tools are] starting to democratize processes around what it means to create. So creation has to be informed by what is new and artists need to reflect on what [they are] actually bringing that’s a more inventive take on the medium than just the output which [...] is being simplified more and more each day.”

While all agreed T2I models bring in new audiences to creative pursuits and thus “democratize creativity,” they distinguished access from true creativity that arises through the dynamic interplay of artists’ sociocultural perspectives, motivations, processes, and uses of the medium. P12 (Switzerland) described their experimentation:

“My real interest is how can I take that [T2I] input and bring it somewhere unexpected. I mean like the real world, and I think it’s really cool when you generate something and then... bring it to the physical world in the form of a newspaper, a postcard, an interactive object [...] it’s where the magic happens, you take it out of where you used to it, and then, if you see it in some other context, then it becomes interesting.”

For these ML-artists, creativity requires raising the bar with regard to use, especially as T2I tools enable end-to-end automation of aesthetic production.

4.2 Folk Theories of Harm: T2I Models as a Target and Source of Harm

ML-artists hold distinctive folk theories of harm concerning the use of T2I models in creative practice that are in tension with common technology development practices (see Table 4). In this theorization, ML-artists perceive T2I models not as a source of harm but as something harmed by (1) engineering efforts to eliminate failures, bugs, and glitches and (2) product policy efforts to limit functionality and implement guardrails. In terms of T2I as a source of harm, they emphasize (3) harms from use are not deterministic but contextual and distributed unequally across contexts. This folk theorization is deeply entangled with how ML-artists conceive of T2I models as a creative medium and true creativity as arising from the situated motivations and social location of T2I users.

4.2.1 Harm Folk Theory 1: Perfecting T2I Models Harms the Artistic Medium.

The first folk theory of harm is: engineering efforts to perfect models by eliminating failures, glitches, and bugs harm T2I models as creative medium. Traditionally, from an engineering and product management perspective, failures, glitches, and bugs are problems to be addressed before a product’s release or fixed as soon as they are identified. As described above, ML-artists view T2I models as an artistic medium with specific properties that are a source of inspiration for creative practice. That folk theory of T2I use — viewing failures, glitches, and bugs as a property of T2I as medium (Section 4.1.1) — is a critical factor shaping how ML-artists understand T2I models as an object that can be harmed. P7, from the U.K., discussed discordance between efforts to fix models from both a technical and responsible AI perspective and keeping what is interesting about T2I models as a medium, noting: “there’s this inherent tension with pushing things [to use the model] creatively at the same time [while] constraining the models.”

This theory concerns how efforts to perfect models erode the aesthetic properties of T2I models as an artistic medium. P8, from India, perceives that efforts to limit T2I “affordances” will render models as “stencils as opposed to pencils.” Addressing model failures, glitches, and bugs is critical for certain contexts in creative practice (i.e., creating high-quality images), especially when T2I models are employed to develop polished products in the creative industry. However, this was not how the ML-artists in this study theorized the uses of T2I models. Rather, their view of them as prototyping tools and as creative material shapes their desire for the opportunity to explore, use, and leverage all properties of the T2I model, even if these are considered a “failure” in a conventional engineering context. Thus, they characterized efforts to polish and “fix” failures as eroding and harming some uses of T2I models in creative practice.

4.2.2 Harm Folk Theory 2: Limiting the Functionality and Release of T2I Models Harms Creative Practice.

A related harm folk theory is: limiting the functionality and release of these models poses harms to creative practice. Common responsible ML strategies to address representational harms focus on model-level interventions, such as implementing safety classifiers that restrict model inputs or outputs, blocklists, and training data remediation [48]. Many ML-artists acknowledged that developers limit model capabilities and public release to mitigate potential harms from their use for a general audience. Nonetheless, they perceive these conventional harm-reduction strategies as inhibiting artists from fully utilizing and leveraging the medium in their practice. P7 (U.K.) emphasized “[These models] are making stuff that human beings could not make, and that is super interesting and super important,” elaborating how limiting model capabilities “remove the actual creative potentiality from it” and “we’ve got to protect the ability for artists to remake and make new worlds that may fit uncomfortably with our own kind of definitions of what’s right and wrong.”

In this folk theory, outputs that might violate product policies are embraced as part of a creative practice that offers critical commentary on society, as opposed to a problem requiring management at the model level. Many ML-artists expressed the importance of protecting space for creating art that comments on hegemonic social norms, with P5 (U.K.) emphasizing, “a lot of art holds a mirror up to see things within our society.” Here, they problematized how power and decisions over model functionality are consolidated among model developers who control decisions to limit and release T2I models’ features and functionality. In this harm folk theory, exploring the boundaries and limits of T2I models is critical to preserving creative practice because historically critical art has intentionally represented social harms to catalyze social change.

4.2.3 Harm Folk Theory 3: Harm is Contextual, Not Deterministic.

In terms of adverse impacts on creative labor, a dominant folk theory is: harms from use are not deterministic, but contextual and distributed unequally across contexts. P10, from Zimbabwe, described how the social impacts of T2I are unevenly distributed and experienced, where “harm…in one way, results in an upside somewhere else.” Similarly, P5 (U.K.) emphasized:

“It’s important…we don’t use these broad brushstrokes,” noting “it is important to look at [any harms]... within the context of that industry or space it is being used. [...] For instance, within the film industry, the harm might be very different than in fine art.”

They also noted how effects are stratified within an industry. For instance, P6 from Poland described “in the context of filmmaking, tools like [T2I models] may result in opportunity loss for below-the-line talent,” referring to the crew involved in pre-production, production, and post-production.

This folk theory further calls for attention to mediating factors of the creative industry and viewing audience, with P5 emphasizing the potential harm from these models is “quite dependent on the context of where that final image is going to exist.” For example, using images from T2I models as part of a prototyping process raises different questions about the image’s authenticity compared to when directly using those images as the final art piece with no further transformation. Or, when it comes to representational harms, the intended use of the generated images determines whether T2I models reinforce or reveal existing biases in creative practice. As discussed in Section 4.1, many ML-artists use T2I models as a medium to create critical art where uncovering potential stereotypes perpetuated by T2I models is leveraged to deliver a message about technology and society. However, the same stereotypical representations could foster discrimination and alienation when models generate inaccurate and offensive depictions of specific cultural and social groups outside a critical art context.

ML-artists emphasized the importance of context when discussing the kinds of control artists should have over T2I models. They all emphasized the need for meaningful consent in how artist images are incorporated into training data. P2 (France) highlighted how style transfer, where T2I models generate images in the style of other artists, leads to “dilution” of that person’s unique style and can potentially give them a “bad reputation,” as there are currently no good ways of controlling or effectively tracing the provenance of artistic work generated solely from these models.

They also articulated a form of non-consensual use in which models could be “weaponized” against others. P2 reflected on an incident where someone fine-tuned a T2I model on images of a fellow artist, so the model generated photorealistic images of that person. As one might expect, the person “did not like not having control over those images being out in the world depicting things that he hadn’t done.” In sum, this folk theory on the contextual nature of harm underscores ML-artists’ complicated and overlapping attitudes about T2I harms that may even sit in tension with their beliefs that model guardrails harm T2I as a medium.

4.3 Folk Theories of Harm Reduction and Responsibility

Table 5: ML-Artist T2I Folk Theories of Harm Reduction.

Dimension of Harm Reduction | Folk Theory | Illustrative Quote
Model-facing governance | Transparency enables creative practice while informing others about appropriate T2I uses | “Maybe you’re accessing where English is not the common language... [can it] acknowledge there are differences in culture...the euro-American bias... that’s encoded... there are... ideological assumptions about universality.” (P9, U.S.)
Model-facing governance | Expanding artist control over model parameters protects T2I as medium | “I want to [be able to] turn off all control parameters or have them on. So whatever is in the raw can actually come out…direct contact between creative thinking and AI is where some really good stuff is going to happen.” (P7, U.K.)
Re-distributing governance | Harm reduction responsibility should be distributed among artists, developers, and moderators | “There’s both commercial and non-commercial entities working in this area… making sure that continues to exist and it doesn’t just become a purely commercial area with purely commercial goals.” (P15, U.K.)

Identifying effective harm reduction practices and ascribing responsibility for operationalizing them is an ongoing conversation in the responsible AI field. ML-artists’ folk theories about harm reduction focus on protecting T2I models, in which they articulated: (1) transparency around T2I model development will enable creative practice; and (2) increasing artist control over T2I models reduces harms to creative practice. In terms of responsibility for mitigating harms, ML-artists emphasized that (3) responsibility for doing harm reduction should be distributed between the artist, model developers, moderators, and distributors of the platform (see Table 5). These folk theories are connected to how ML-artists envision T2I models as an important medium and tool for creative practice necessitating their protection, with a vision of dispersing responsibility for the creation of critical art and model governance among differently situated actors.

4.3.1 Harm Reduction Folk Theory 1: Transparency in Model Development Fosters Harm Reduction.

The first folk theory of harm reduction is: transparent T2I models enable creative practice while informing others about appropriate uses of T2I models. Artists emphasized that understanding T2I model limitations (i.e., “seeing what is intentionally left out,” P15, U.K.) and capabilities (i.e., “knowing [its] boundaries,” Breakout Group 3) increases their knowledge for engaging with T2I as medium. Here, ML-artists articulate transparency as protective of creative practice, describing how increased transparency around model limitations could educate and enable other creators to make informed decisions about using T2I models as a tool or a medium. As P9, from the U.S., voiced:

“How can some of these systems acknowledge their deficiencies? How can they bring forward some of the inadequacies of their cultural context?”

In this folk theory, transparency provides necessary contextual information enabling users to use T2I models in creative practice.
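One familiar form such transparency could take is machine-readable documentation in the spirit of “model cards”; the sketch below is a hypothetical example of surfacing the limitations and boundaries artists asked for, not any product’s actual disclosure.

```python
# Hypothetical sketch of transparency-as-documentation, loosely in the spirit
# of "model cards": machine-readable notes on a T2I model's boundaries and
# known limitations. All fields and values are illustrative assumptions.
t2i_model_card = {
    "intended_uses": ["prototyping", "moodboarding", "critical art practice"],
    "known_limitations": [
        "over-representation of some groups, under-representation of others",
        "Euro-American bias encoded in web-scraped training data",
    ],
    "intentionally_excluded": "categories filtered out of training data",
    "content_filters": "safety classifiers screen prompts and outputs",
}

for key, value in t2i_model_card.items():
    print(f"{key}: {value}")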

4.3.2 Harm Reduction Folk Theory 2: Expanding Artist Control Over T2I Models Facilitates Harm Reduction.

The second, related folk theory is: expanding artist control over model parameters protects T2I as medium. Like the theorized role of transparency, expanded control over model parameters as a form of harm reduction is grounded in ML-artists’ framing of T2I models as medium. It is also entangled with the harm folk theory that limiting functionality harms creative practice (Section 4.2.2). P5 described how increased artist control would make working with T2I models feel more like working with other mediums, emphasizing: “I want working with T2I models to feel more like working with a material to mold and shape iteratively.”

Similarly, other ML-artists emphasized the importance of control and how it would enable them to “drive compositions more precisely” and “control the output along a variety of dimensions…within guardrails (e.g., I don’t want to see spiders, porn)” (P2, France); P7 (U.K.) desired the ability “to turn off” and adjust all model guardrails, allowing “direct contact between creative thinking and AI” where artistic creativity can flourish. This theory of harm reduction involves increased user control over fine-tuning and model guardrails, about which one breakout group mused:

“Can we design these systems where there’s more causality control at the creator’s side? For example, they can train their models and the system is only a recipe to digest these representations.”

In sum, this theory of harm reduction envisions releasing control over training data and tuning practices to artist communities that can influence culturally situated representations to develop T2I models that better fit the context of creative practice.
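
To ground what such expanded control can look like in practice, the following is a minimal sketch using the open-source diffusers library; the checkpoint identifier is only an example, and whether any hosted T2I product exposes equivalent switches is a product and policy decision rather than a given.

```python
# A sketch of artist-side parameter control with Hugging Face diffusers.
# Passing safety_checker=None disables the output guardrail (cf. P7's request);
# negative_prompt acts as an artist-set guardrail (cf. P2's "no spiders, porn").
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint; any SD weights work
    torch_dtype=torch.float16,
    safety_checker=None,               # "turn off" the provider-side guardrail
).to("cuda")

image = pipe(
    "a glitched portrait in the style of analog video feedback",
    negative_prompt="spiders, porn",   # guardrail chosen by the artist
    guidance_scale=4.0,                # looser prompt adherence: more "raw" output
    num_inference_steps=30,
).images[0]
image.save("draft.png")
```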

4.3.3 Harm Reduction Folk Theory 3: Redistributing Responsibility for Harm Reduction.

The final harm reduction theory is: harm reduction responsibility should be distributed among artists, developers, and moderators. This theory reflects ML-artists’ understanding of harms from use as contextual, encompassing extant power dynamics as well as artist intent and motivation. In terms of artist responsibility, they emphasized how, “historically,” creators have had the ability to create “awful” experiences with “all mediums” (P7, U.K.). Thus, they felt those making art bear responsibility for the image and its impact on the world. In terms of developer responsibility, ML-artists emphasized that choices made in ML development pipelines (e.g., poor data labeling) lead to problematic representations:

“I want a model trained on accurately labeled data that incorporates my cultural reality.... I dived into the LAION-5B [103] dataset... I remember spotting issues regarding the labeling of specific cultural groups in East Africa, that’s the Maasai people... There are issues regarding who is doing the data labeling.” (P11, Kenya)

As well, P7 problematized that “the [model] creators and the [model] moderators are the same,” which differs from other artistic mediums. This theory critiques how much power developers hold to address model issues and asserts that if artists have the ability to “manipulate” the models (i.e., have more control over the use of T2I models), then more “responsibility shifts back to the artist” (P8, India). This theory reflects a desire for broader forms of model governance to ensure “there is no one morally correct system that any one entity is being forced to [or] trying to create” (P8). In sum, ML-artists critique the current, narrow distribution of responsibility and normatively call for distributing it more widely.
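
P11’s account of auditing dataset labels can also be made concrete. The sketch below assumes the LAION metadata remains published on the Hugging Face Hub under the name used here and that its column names (TEXT, URL) match the released schema; it simply samples captions mentioning a specific cultural group so labeling quality can be inspected by eye.

```python
# A sketch of a community-side label audit: stream LAION metadata and collect
# a small sample of captions mentioning a specific cultural group.
from datasets import load_dataset

meta = load_dataset("laion/laion2B-en", split="train", streaming=True)

hits = []
for row in meta:
    caption = (row.get("TEXT") or "").lower()
    if "maasai" in caption:
        hits.append((row["TEXT"], row["URL"]))
    if len(hits) >= 20:  # a small sample is enough to eyeball labeling quality
        break

for caption, url in hits:
    print(caption, "->", url)
```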


5 DISCUSSION

Figure 4: Relationships Between T2I Folk Theories.

Through qualitative engagement with global ML-artists, this research illustrates how identifying folk theories of use, harm, and harm reduction enables a rich understanding of how situated communities experience creative ML. ML-artist folk theories of T2I use, harm, and harm reduction pattern one another: theories of use shape perceptions of harm and harm reduction. This reveals that folk theories about different aspects of algorithmic systems are interlinked and that, for this community of practice, beliefs about technology use form a basepoint for beliefs about harm and harm reduction (see Figure 4).

Our findings provide a grounded example of how folk theorization from user communities can be in tension with the knowledge practices of technology developers. In particular, the ways ML-artists orient to T2I models as a medium to be molded through their creative practice inform their perceptions of T2I harm and harm reduction practices in this context. While they recognize that T2I models, embedded as a tool in power-laden creative industries and art worlds, can co-produce harmful social impacts (RQ1), they simultaneously perceive T2I models as a medium harmed by engineering efforts to eliminate bugs and glitches and by responsible AI mitigations, such as safety classifiers and blocklists (RQ2). The ML-artists in this study recognized the necessity of these interventions when T2I models are employed outside of critical art contexts or used by general audiences (Section 4.2). Similarly, their theorization of T2I as medium and articulation of “true creativity” as rising above basic model affordances (RQ1) (Section 4.1) inform their desire for harm reduction efforts that preserve and increase artist access to T2I models by improving model transparency and redistributing responsibility for model governance beyond developers (RQ3) (Section 4.3).

Recognizing that scholars concerned with the responsible development of creative ML are asking pressing questions about harm and harm reduction, we hope this research further illustrates the value of bringing in questions of technology use. Overall, we found ML-artist folk theories of use are informed by their prior experiences working with artistic mediums and beliefs about what an authentic creative practice entails, which, in turn, informs their folk theories of harm and harm reduction. Drawing on this insight, we discuss three ways that examining folk theories across the dimensions of use, harm, and harm reduction can aid researchers and practitioners: (1) recognizing folk theories as an argument about sociotechnical problems and solutions, (2) situating user communities in broader discourses, and (3) using folk theories to inform HCI research and responsible AI policy making.

5.1 Folk Theories as an Argument About Sociotechnical Problems and Solutions

Folk theories reveal how situated communities experience algorithmic technologies in their lives and social relations [2, 108, 129]. HCI and folk theory scholar Michael Ann DeVito [27, p. 2] describes how folk theories that (de)motivate engagement with an algorithmic system incorporate users’ “assessment of the risks and benefits” of that system or its constitutive features. As the ways differently situated communities of practice experience risks and benefits intersect with social hierarchies — such as gender, socioeconomic status, and nationality, among others — folk theories reflect the frictions between the values encoded in algorithmic systems and those held by communities [27].

Our study complements these insights to further illustrate how folk theorization can identify design frictions and function as a productive force for communities of practice to diagnose problems with ML systems and articulate proposed solutions. Interrogating how different communities frame ML problems and solutions can reveal much about how technologies reinforce or redistribute social power. Science and Technology Studies scholars Steve Woolgar and Dorothy Pawluch [125] describe how, when different publics frame social problems, there is a tendency towards “ontological gerrymandering,” through which certain aspects of an issue are framed as problematic or visible while other parts remain unchallenged or invisible. These gerrymandering tendencies are evident, for example, in ML-artists’ folk theories that T2I models are a medium with specific properties (Section 4.1.1), and thus that efforts to fix glitches and limit functionality harm the medium (Section 4.2.2). Similarly, the theorization of T2I models as a mundane tool for performing creative work (Section 4.1.2) relates to their understanding of harms of use as non-deterministic (Section 4.2.3), and thus to the belief that increased model transparency will enable other users to understand appropriate uses (Section 4.3.1). In making decisions about which aspects of a problem should be (in)visible, theorizers express their desired (and here, self-interested) visions of technological futures.

Recommendation #1—Employ the folk theory lens to examine situated community attitudes about technology problems and solutions. Researchers and responsible AI practitioners can employ the folk theory lens across topics of algorithmic use, harm, and harm reduction as a research method to understand how different communities articulate sociotechnical problems and solutions. HCI and responsible AI scholars advocate the importance of engaging communities to identify potential harms and harm reduction efforts [22, 31, 36, 88]. Folk theories are a kind of storytelling that animates modes of thinking, ideologies, and rationalities of the power relations encoded in technologies. For example, when asked to think about strategies for reducing harm from integrating T2I models in creative practice, ML-artists identified increasing transparency around how the model is created and expanding artist control and involvement in model development (Section 4.3).

Responsible AI researchers and practitioners can analyze folk theories as sociotechnical stories that render visible certain aspects of a problem and put these insights in service of equity-oriented design strategies. In the above example, while this folk theory of harm reduction is rooted in preserving T2I as medium, with reflexivity from practitioners (see: [22, p. 13]), analysis of folk theories can be acted on in ways that improve technology development for a wide range of art communities, even those who are not proponents of creative ML. Thus, for practitioners, interrogating the knowledge claims mobilized in folk theories can illuminate how different communities make normative claims about sociotechnical problems.

5.2 Situating Community Folk Theories in Broader Technology Discourses

Our analysis illuminates how the folk theorization of this community of practice is reciprocally shaped by their beliefs about how contextual power dynamics shape creativity. They often avoided hard framings of T2I harms; instead, they emphasized the non-deterministic nature of harm that requires discussion of context, artist intent, and viewing audiences. This approach to T2I harms intersects with two public discourses: democratizing creativity and commodifying creativity. The democratizing creativity discourse positions ML models as an unflinchingly empowering tool enabling broad access to creative pursuits [47, 85], while the commodifying creativity discourse emphasizes potential harmful outcomes of creative ML models through which the off-the-shelf ability to generate images will devalue and exploit creative labor [64, 74].

Our study illustrates how ML-artist folk theorization endorses aspects of both opposing discourses. In terms of democratizing creativity, ML-artists articulated how T2I models increase broad access to creative tools and fill gaps in tertiary education, two positive futures they envisioned. However, ML-artists also described the potential for job displacement and erosion of certain roles in creative industries, including entry-level and some technical positions.

Recommendation #2—Reflect on the distinctions between folk theories among different communities. Our findings illuminate how T2I is reasoned about as a medium. Given the partial ways ML-artists endorsed popular but opposing discourses through their folk theories of harm and use, an alternative analytical lens from media studies that reflects the multivalent impacts of technologies offers a nuanced interpretation of our results. Media scholar Marshall McLuhan [76] articulated four effects of any technology or medium on society: every technology retrieves prior practices in a new form; often reverses into a form distinct from its original characteristics; simultaneously obsolesces a prior technology or practice; and intensifies, accelerates, or enhances some human action.

Our study suggests ML-artist folk theorization maps well onto McLuhan’s media laws, a framework previously employed to understand DIY and maker cultures [115]. Their folk theories characterize T2I models as retrieving direct representation, hegemonic social representations, and social power dynamics; as enhancing the speed of creativity, the bar for “true creativity,” communication of abstract ideas, access to representation, job displacement, and the influence of the companies developing T2I models; as obsolescing certain forms of artistic labor and entry-level creative industry job roles; and, in their worst form, as reversing into loss of creative identity, reinforcement of existing power structures, and loss of artist control over their artwork.
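
Because this mapping is dense, it may help to restate it as a simple data structure. The sketch below only transcribes our reading of the ML-artists’ folk theories into McLuhan’s four laws; it introduces no new claims, and the key names are our own shorthand.

```python
# The tetrad mapping above, restated as a dictionary with one key per McLuhan law.
t2i_tetrad = {
    "retrieves": [
        "direct representation",
        "hegemonic social representations",
        "social power dynamics",
    ],
    "enhances": [
        "speed of creativity",
        "the bar for 'true creativity'",
        "communication of abstract ideas",
        "access to representation",
        "job displacement",
        "influence of companies developing T2I models",
    ],
    "obsolesces": [
        "certain forms of artistic labor",
        "entry-level creative industry job roles",
    ],
    "reverses_into": [
        "loss of creative identity",
        "reinforcement of existing power structures",
        "loss of artist control over artwork",
    ],
}
```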

5.3 Using Folk Theories to Inform HCI Research and Responsible AI Policy

While folk theorization is an analytical framework for understanding the informal knowledge users develop to explain the outcomes, effects, or consequences of algorithmic systems, it can also inform responsible AI research and policy practitioners by inviting a human-centered approach to examining human control and algorithmic automation [106]. The ML-artist folk theories we found in this study distill findings illustrating linkages between normative understandings of T2I use, harm, and harm reduction. The field of responsible AI research and policy seeks to develop AI systems that produce equitable outcomes. Important work in this field has focused on potential harms from ML systems, particularly computational harms arising from choices made in ML pipelines [105].

Recommendation #3—Examine folk theories across use, harm, and harm reduction to understand deeper relationships between these dimensions. Our study illustrates the value of examining these dimensions in concert, as we found they build upon and inform each other. How ML-artists conceptualize normative uses of T2I models directly informs their views of potential harms and harm reduction strategies. Tracing these throughlines offers important insight not only into what sociotechnical harms they see but also into what strategies are needed to address them. For example, ML-artists’ folk theory that T2I models expand access to and forms of creative expression (Section 4.1.3) relates to their theorization that engineering efforts to eliminate failures, glitches, and bugs (Section 4.2.1) and to limit functionality (Section 4.2.2) harm T2I models for use in creative practice. The frictions between theories of use and harm connect to their beliefs about harm reduction, such as how “improving” T2I models involves providing artists with more knowledge about the medium and expanding the representations possible from T2I models, enabling them to control and work with the medium in new ways (Section 4.3). In this way, folk theories offer a useful analytic for practitioners to understand user knowledge.

Recommendation #4—Contextualize folk theory perspectives within the policy and regulatory landscape, with equity-oriented outcomes as a north star. Our findings affirm how folk theories can be “in tension” not only with institutionalized conceptions of how systems work held by technology designers [37] but also with ways to govern, manage, and control T2I models. ML-artists’ desire for full model control may be infeasible depending on a number of social, ethical, and regulatory contextual factors; however, more distributed governance could enable more equitable algorithms. Moreover, the normative desires of one community (e.g., ML-artists) may sit in tension with those of other communities (e.g., non-ML-artists). Future work exploring commonalities and distinctions in T2I folk theorization across communities could provide further insights for creating appropriate governance mechanisms.

5.4 Limitations

Although this study offers critical insights into ML-artist folk theorization, it has limitations. Our study examines the perspectives of artists who have employed or currently employ T2I models in their creative practice, and thus reflects the situated knowledge of those who likely hold more favorable views toward T2I models. As such, this study may not capture the folk theorization of artists critical of using T2I models, artists who do not have access to these models, or artists whose practice does not include digital art. While our recruitment through a research partner enabled us to convene global artists who live and work in different cultural contexts, it limited the initial pool of artists to those with direct connections to the organization or those who have relationships with someone with a direct connection.

Our study design also focused on understanding the perspectives and beliefs of one particular artist community and did not enable analysis across differently situated artist communities, such as (1) those who are unfamiliar with T2I models, (2) those who are familiar with but have not employed T2I models, and (3) those who have incorporated T2I models in their practice. Exploring the (dis)continuities of folk theories across differently situated communities and artistic intentions is a fruitful area for future research. Such work could potentially inform design or feature recommendations to develop T2I tools that better serve a wider range of artist communities.


6 CONCLUSION

We describe how ML-artists experience T2I models in their creative practice and situate these models in the broader field of creativity through folk theorization of T2I use, harm, and harm reduction. We identified these folk theories through three workshops with ML-artists in ten countries, incorporating critical and sociotechnical perspectives on harm. Our findings indicate ML-artists’ perspectives toward T2I models are shaped by the values and motivations they bring to their creative practice and their understanding of broader social inequalities. We conclude with discussion of how folk theorization informs equity-oriented responsible AI practice and points to tangible ways to operationalize harm reduction outside of model guardrails.


ACKNOWLEDGMENTS

We thank our participants for their generosity in sharing their insight and expertise. We also thank Jason Baldridge, Fernando Diaz, Michael Madaio, Freya Salway, Renelito Delos Santos, Andrew Smart, Ashley Walker, and the anonymous reviewers for their comments that contributed to the paper’s development.

Footnotes

1. Note: Folk theories differ from mental models, which are more akin to a schema or representation of a technology [43, 58, 69] and are often used in usability studies [17, 73, 111, 116] or causal analyses of concepts like satisfaction or trust [43, 116].

2. The research proposal, workshop protocol, recruitment material, and consent form were reviewed by experts at our institution in domains including ethics, human subjects research, policy, legal, and privacy. While the institution of the lead author does not require IRB approval, we adhere to similarly strict standards.

3. This research was conducted shortly after DALL-E-2 was broadly opened to the public in September 2022. We did not include “degree of familiarity with T2I tools” in our recruitment screening, as these tools had been broadly accessible for only a limited time.
Supplemental Material

Video Presentation (mp4, 30.5 MB)

References

1. Stability AI. 2022. Stable Diffusion. https://stablediffusionweb.com/
2. Jack Andersen. 2020. Understanding and Interpreting Algorithms: Toward a Hermeneutics of Algorithms. Media, Culture & Society 42, 7-8 (2020), 1479–1494. https://doi.org/10.1177/0163443720919373
3. Sofian Audrey. 2021. Art in the Age of Machine Learning. MIT Press, Cambridge, Massachusetts.
4. Pesala Bandara. 2022. Artists Stage Mass Online Protest Against AI Image Generators. PetaPixel. https://petapixel.com/2022/12/19/artists-stage-mass-online-protest-against-ai-image-generators/
5. Hritik Bansal, Da Yin, Masoud Monajatipoor, and Kai-Wei Chang. 2022. How Well Can Text-to-Image Generative Models Understand Ethical Natural Language Interventions? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, 1358–1370. https://doi.org/10.18653/v1/2022.emnlp-main.88
6. Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, and Aylin Caliskan. 2023. Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (Chicago, IL, USA) (FAccT ’23). Association for Computing Machinery, New York, NY, USA, 1493–1504. https://doi.org/10.1145/3593013.3594095
7. Charlotte Bird, Eddie Ungless, and Atoosa Kasirzadeh. 2023. Typology of Risks of Generative Text-to-Image Models. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (Montréal, QC, Canada) (AIES ’23). Association for Computing Machinery, New York, NY, USA, 396–410. https://doi.org/10.1145/3600211.3604722
8. Abeba Birhane, Elayne Ruane, Thomas Laurent, Matthew S. Brown, Johnathan Flowers, Anthony Ventresque, and Christopher L. Dancy. 2022. The Forgotten Margins of AI Ethics. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT ’22). Association for Computing Machinery, New York, NY, USA, 948–958. https://doi.org/10.1145/3531146.3533157
9. Su Lin Blodgett, Q. Vera Liao, Alexandra Olteanu, Rada Mihalcea, Michael Muller, Morgan Klaus Scheuerman, Chenhao Tan, and Qian Yang. 2022. Responsible Language Technologies: Foreseeing and Mitigating Harms. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI EA ’22). Association for Computing Machinery, New York, NY, USA, Article 152, 3 pages. https://doi.org/10.1145/3491101.3516502
10. Margarita Boyarskaya, Alexandra Olteanu, and Kate Crawford. 2020. Overcoming Failures of Imagination in AI Infused System Development and Deployment. arxiv:2011.13416 [cs.CY]
11. Virginia Braun and Victoria Clarke. 2019. Reflecting on Reflexive Thematic Analysis. Qualitative Research in Sport, Exercise and Health 11, 4 (2019), 589–597. https://doi.org/10.1080/2159676X.2019.1628806
12. Virginia Braun and Victoria Clarke. 2020. One Size Fits All? What Counts as Quality Practice in (Reflexive) Thematic Analysis? Qualitative Research in Psychology 18, 3 (2020), 1–25. https://doi.org/10.1080/14780887.2020.1769238
13. Virginia Braun and Victoria Clarke. 2022. Thematic Analysis: A Practical Guide. Sage, Thousand Oaks, CA.
14. Baptiste Caramiaux and Marco Donnarumma. 2021. Artificial Intelligence in Music and Performance: A Subjective Art-Research Inquiry. In Handbook of Artificial Intelligence for Music. Springer, New York, NY, 75–95.
15. Baptiste Caramiaux and Sarah Fdili Alaoui. 2022. "Explorers of Unknown Planets": Practices and Politics of Artificial Intelligence in Visual Arts. In Proc. ACM Hum.-Comput. Interact., Vol. 6. Association for Computing Machinery, New York, NY, USA, Article 477, 24 pages. https://doi.org/10.1145/3555578
16. Minsuk Chang, Stefania Druga, Alexander J. Fiannaca, Pedro Vergani, Chinmay Kulkarni, Carrie J. Cai, and Michael Terry. 2023. The Prompt Artists. In Proceedings of the 15th Conference on Creativity and Cognition (Virtual Event, USA) (C&C ’23). Association for Computing Machinery, New York, NY, USA, 75–87. https://doi.org/10.1145/3591196.3593515
17. Janghee Cho. 2018. Mental Models and Home Virtual Assistants (HVAs). In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (Montréal, QC, Canada) (CHI EA ’18). Association for Computing Machinery, New York, NY, USA, 1–6. https://doi.org/10.1145/3170427.3180286
18. Michael Chui, Eric Hazan, Roger Roberts, Alex Singla, Kate Smaje, Alex Sukharevsky, Lareina Yee, and Rodney Zemmel. 2023. The Economic Potential of Generative AI: The Next Productivity Frontier. McKinsey & Company. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
19. Paul Cohen. 2016. Harold Cohen and AARON. AI Magazine 37, 4 (2016), 63–66. https://doi.org/10.1609/aimag.v37i4.2695
20. Samantha Cole. 2023. ’I Don’t Believe You:’ Artist Banned from r/Art Because Mods Thought They Used AI. Vice. https://www.vice.com/en/article/y3p9yg/artist-banned-from-art-reddit
21. European Commission. 2021. Regulation of the European Parliament. Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
22. Ned Cooper, Tiffanie Horne, Gillian R. Hayes, Courtney Heldreth, Michal Lahav, Jess Holbrook, and Lauren Wilcox. 2022. A Systematic Review and Thematic Analysis of Community-Collaborative Approaches to Computing Research. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 73, 18 pages. https://doi.org/10.1145/3491102.3517716
23. Kelley Cotter. 2019. Playing the Visibility Game: How Digital Influencers and Algorithms Negotiate Influence on Instagram. New Media & Society 21, 4 (2019), 895–913. https://doi.org/10.1177/1461444818815684
24. Christopher A. Le Dantec and Carl DiSalvo. 2013. Infrastructuring and the Formation of Publics in Participatory Design. Social Studies of Science 43, 2 (2013), 241–264. https://doi.org/10.1177/0306312712471581
25. Thomas H. Davenport and Nitin Mittal. 2022. How Generative AI is Changing Creative Work. Harvard Business Review. https://hbr.org/2022/11/how-generative-ai-is-changing-creative-work
26. Michael Ann DeVito. 2021. Adaptive Folk Theorization as a Path to Algorithmic Literacy on Changing Platforms. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 339 (Oct 2021), 38 pages. https://doi.org/10.1145/3476080
27. Michael Ann DeVito. 2022. How Transfeminine TikTok Creators Navigate the Algorithmic Trap of Visibility Via Folk Theorization. Proc. ACM Hum.-Comput. Interact. 6, CSCW2, Article 380 (Nov 2022), 31 pages. https://doi.org/10.1145/3555105
28. Michael A. DeVito, Jeremy Birnholtz, Jeffery T. Hancock, Megan French, and Sunny Liu. 2018. How People Form Folk Theories of Social Media Feeds and What It Means for How We Study Self-Presentation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montréal, QC, Canada) (CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3173574.3173694
29. Michael A. DeVito, Darren Gergle, and Jeremy Birnholtz. 2017. "Algorithms Ruin Everything": #RIPTwitter, Folk Theories, and Resistance to Algorithmic Change in Social Media. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17). Association for Computing Machinery, New York, NY, USA, 3163–3174. https://doi.org/10.1145/3025453.3025659
30. Michael A. DeVito, Jeffrey T. Hancock, Megan French, Jeremy Birnholtz, Judd Antin, Karrie Karahalios, Stephanie Tong, and Irina Shklovski. 2018. The Algorithm and the User: How Can HCI Use Lay Understandings of Algorithmic Systems? In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (Montréal, QC, Canada) (CHI EA ’18). Association for Computing Machinery, New York, NY, USA, 1–6. https://doi.org/10.1145/3170427.3186320
31. Catherine D’Ignazio and Lauren F. Klein. 2020. Data Feminism. MIT Press, Cambridge, MA.
32. Marco Donnarumma, Wesley Goatley, and Helena Nikonole. 2023. Critical Art and the Ethics of AI. Cryptpad. https://cryptpad.fr/pad/#/2/pad/view/H44naOgAhHBdcF2vb2HDKtCpWs0hV2sHML8yMKIp9I0/
33. Benj Edwards. 2022. Flooded with AI-generated Images, Some Art Communities Ban Them Completely. Ars Technica. https://arstechnica.com/information-technology/2022/09/flooded-with-ai-generated-images-some-art-communities-ban-them-completely/
34. Benj Edwards. 2022. Have AI Image Generators Assimilated Your Art? New Tool Lets You Check. Ars Technica. https://arstechnica.com/information-technology/2022/09/have-ai-image-generators-assimilated-your-art-new-tool-lets-you-check/
35. AICAN + Ahmed Elgammal. 2019. Faceless Portraits Transcending Time. https://uploads.strikinglycdn.com/files/3e2cdfa0-8b8f-44ea-a6ca-d12f123e3b0c/AICAN-HG-Catalogue-web.pdf
36. Sheena Erete, Yolanda Rankin, and Jakita Thomas. 2023. A Method to the Madness: Applying an Intersectional Analysis of Structural Oppression and Power in HCI and Design. ACM Trans. Comput.-Hum. Interact. 30, 2, Article 24 (Apr 2023), 45 pages. https://doi.org/10.1145/3507695
37. Motahhare Eslami, Karrie Karahalios, Christian Sandvig, Kristen Vaccaro, Aimee Rickman, Kevin Hamilton, and Alex Kirlik. 2016. First I "Like" It, Then I Hide It: Folk Theories of Social Feeds. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI ’16). Association for Computing Machinery, New York, NY, USA, 2371–2382. https://doi.org/10.1145/2858036.2858494
38. Motahhare Eslami, Kristen Vaccaro, Min Kyung Lee, Amit Elazari Bar On, Eric Gilbert, and Karrie Karahalios. 2019. User Attitudes towards Algorithmic Opacity and Transparency in Online Reviewing Platforms. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3290605.3300724
39. Sarah Fdili Alaoui. 2019. Making an Interactive Dance Piece: Tensions in Integrating Technology in Art. In Proceedings of the 2019 on Designing Interactive Systems Conference (San Diego, CA, USA) (DIS ’19). Association for Computing Machinery, New York, NY, USA, 1195–1208. https://doi.org/10.1145/3322276.3322289
40. Rebecca Fiebrink and Laetitia Sonami. 2020. Reflections on Eight Years of Instrument Creation with Machine Learning. In International Conference on New Interfaces for Musical Expression (NIME). International Conference on New Interfaces for Musical Expression, Birmingham, U.K., 1–6. https://nime2020.bcu.ac.uk/
41. World Economic Forum. 2018. Creative Disruption: The Impact of Emerging Technologies on the Creative Economy. Technical Report. World Economic Forum. https://www3.weforum.org/docs/39655_CREATIVE-DISRUPTION.pdf
42. Vinitha Gadiraju, Shaun Kane, Sunipa Dev, Alex Taylor, Ding Wang, Emily Denton, and Robin Brewer. 2023. "I Wouldn’t Say Offensive But...": Disability-Centered Perspectives on Large Language Models. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (Chicago, IL, USA) (FAccT ’23). Association for Computing Machinery, New York, NY, USA, 205–216. https://doi.org/10.1145/3593013.3593989
43. Katy Ilonka Gero, Zahra Ashktorab, Casey Dugan, Qian Pan, James Johnson, Werner Geyer, Maria Ruiz, Sarah Miller, David R. Millen, Murray Campbell, Sadhana Kumaravel, and Wei Zhang. 2020. Mental Models of AI Agents in a Cooperative Game Setting. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376316
44. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. In Advances in Neural Information Processing Systems, Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K.Q. Weinberger (Eds.). Vol. 27. Curran Associates, Inc., Montréal, QC, Canada. https://proceedings.neurips.cc/paper_files/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf
45. Gabriel Grill and Nazanin Andalibi. 2022. Attitudes and Folk Theories of Data Subjects on Transparency and Accuracy in Emotion Recognition. Proc. ACM Hum.-Comput. Interact. 6, CSCW1, Article 78 (Apr 2022), 35 pages. https://doi.org/10.1145/3512925
46. Zoe Guy. 2023. AI-Generated Art Is Not Copyrightable, Judge Rules. Vulture. https://www.vulture.com/2023/08/ai-art-copyright-ineligible.html
47. Matthew Guzdial and Mark Riedl. 2019. An Interaction Framework for Studying Co-Creative AI. arxiv:1903.09709 [cs.HC]
48. Susan Hao, Piyush Kumar, Sarah Laszlo, Shivani Poddar, Bhaktipriya Radharapu, and Renee Shelby. 2023. Safety and Fairness for Content Moderation in Generative Models. arxiv:2306.06135 [cs.LG]
49. Christina N. Harrington, Katya Borgos-Rodriguez, and Anne Marie Piper. 2019. Engaging Low-Income African American Older Adults in Health Discussions through Community-Based Design Workshops. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3290605.3300823
50. Susan Hekman. 1997. Truth and Method: Feminist Standpoint Theory Revisited. Signs: Journal of Women in Culture and Society 22, 2 (1997), 341–365. https://doi.org/10.1086/495159
51. Drew Hemment, Morgan Currie, SJ Bennett, Jake Elwes, Anna Ridler, Caroline Sinders, Matjaz Vidmar, Robin Hill, and Holly Warner. 2023. AI in the Public Eye: Investigating Public AI Literacy Through AI Art. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (Chicago, IL, USA) (FAccT ’23). Association for Computing Machinery, New York, NY, USA, 931–942. https://doi.org/10.1145/3593013.3594052
52. Lynn Hershman. 1998. Agent Ruby. LynnHershman.com. https://www.lynnhershman.com/agent-ruby/
53. Aaron Hertzmann. 2019. Aesthetics of Neural Network Art. arxiv:1903.05696 [cs.AI]
54. The White House. 2023. Blueprint for an AI Bill of Rights. whitehouse.gov. https://www.whitehouse.gov/ostp/ai-bill-of-rights/
55. Barbican Immersive. 2022. AI: More Than Human. Barbican Immersive. https://www.barbican.org.uk/sites/default/files/documents/2022-09/BI%20AI%20More%20than%20Human%20-%20Presentation%20%28tour%29%2030.09.2022.pdf
56. Harry H. Jiang, Lauren Brown, Jessica Cheng, Mehtab Khan, Abhishek Gupta, Deja Workman, Alex Hanna, Johnathan Flowers, and Timnit Gebru. 2023. AI Art and Its Impact on Artists. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (Montréal, QC, Canada) (AIES ’23). Association for Computing Machinery, New York, NY, USA, 363–374. https://doi.org/10.1145/3600211.3604681
57. Johannezz. 2022. The Promptist Manifesto. https://deeplearn.art/the-promptist-manifesto
58. Philip Nicholas Johnson-Laird. 1983. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Harvard University Press, Cambridge, MA.
59. Leslie Jones. 2023. Coded: Art Enters the Computer Age, 1952-1982. LACMA, Los Angeles, CA.
60. Théo Jourdan and Baptiste Caramiaux. 2023. Culture and Politics of Machine Learning in NIME: A Preliminary Qualitative Inquiry. In New Interfaces for Musical Expression (NIME). International Conference on New Interfaces for Musical Expression, Mexico, Mexico, 1–8. https://hal.science/hal-04075438
61. Shivani Kapania, Oliver Siy, Gabe Clapper, Azhagu Meena SP, and Nithya Sambasivan. 2022. “Because AI is 100% Right and Safe”: User Attitudes and Sources of AI Authority in India. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 158, 18 pages. https://doi.org/10.1145/3491102.3517533
62. Nadia Karizat, Dan Delmonaco, Motahhare Eslami, and Nazanin Andalibi. 2021. Algorithmic Folk Theories and Identity: How TikTok Users Co-Produce Knowledge of Identity and Engage in Algorithmic Resistance. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 305 (Oct 2021), 44 pages. https://doi.org/10.1145/3476046
63. Michael Katell, Meg Young, Dharma Dailey, Bernease Herman, Vivian Guetler, Aaron Tam, Corinne Bintz, Daniella Raz, and P. M. Krafft. 2020. Toward Situated Interventions for Algorithmic Equity: Lessons from the Field. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (Barcelona, Spain) (FAT* ’20). Association for Computing Machinery, New York, NY, USA, 45–55. https://doi.org/10.1145/3351095.3372874
64. Kevin Kelly. 2022. Picture Limitless Creativity at Your Fingertips. Wired. https://www.wired.com/story/picture-limitless-creativity-ai-image-generators/
65. Willett Kempton. 1986. Two Theories of Home Heat Control. Cognitive Science 10, 1 (1986), 75–90.
66. Leo Kim. 2022. Korean Illustrator Kim Jung Gi’s ‘Resurrection’ via AI Image Generator Is Orientalism in New Clothing. ARTnews. https://www.artnews.com/art-news/news/kim-jung-gi-death-stable-diffusion-artificial-intelligence-1234649787/
67. Erin Klawitter and Eszter Hargittai. 2018. “It’s Like Learning a Whole Other Language”: The Role of Algorithmic Skills in the Curation of Creative Goods. International Journal of Communication 12 (2018), 3490–3510. https://www.zora.uzh.ch/id/eprint/168021/
68. Karin D Knorr-Cetina. 1981. The Manufacture of Knowledge: An Essay on the Constructivist and Contextual Nature of Science. Pergamon, New York, NY.
69. Todd Kulesza, Simone Stumpf, Margaret Burnett, and Irwin Kwan. 2012. Tell Me More? The Effects of Mental Model Soundness on Personalizing an Intelligent Agent. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Austin, Texas, USA) (CHI ’12). Association for Computing Machinery, New York, NY, USA, 1–10. https://doi.org/10.1145/2207676.2207678
70. Christopher A. Le Dantec and Sarah Fox. 2015. Strangers at the Gate: Gaining Access, Building Rapport, and Co-Constructing Community-Based Research. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (Vancouver, BC, Canada) (CSCW ’15). Association for Computing Machinery, New York, NY, USA, 1348–1358. https://doi.org/10.1145/2675133.2675147
71. Golan Levin and Tega Brain. 2021. Code as Creative Medium: A Handbook for Computational Art and Design. MIT Press, Cambridge, MA.
72. Tony Liao and Olivia Tyson. 2021. “Crystal is Creepy, but Cool”: Mapping Folk Theories and Responses to Automated Personality Recognition Algorithms. Social Media + Society 7, 2 (2021), 1–11. https://doi.org/10.1177/20563051211010170
73. Sebastian Linxen, Silvia Heinz, Livia J. Müller, Alexandre N. Tuch, and Klaus Opwis. 2014. Mental Models for Web Objects in Different Cultural Settings. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems (Toronto, Ontario, Canada) (CHI EA ’14). Association for Computing Machinery, New York, NY, USA, 2557–2562. https://doi.org/10.1145/2559206.2581209
74. Robin Mansell. 2021. Enclosing or Democratising the AI Artwork World. Cambridge Journal of Law, Politics, and Art 1 (2021), 247–251. http://eprints.lse.ac.uk/id/eprint/110814
75. Jon McCormack, Camilo Cruz Gambardella, Nina Rajcic, Stephen James Krol, Maria Teresa Llano, and Meng Yang. 2023. Is Writing Prompts Really Making Art? In Artificial Intelligence in Music, Sound, Art and Design, Colin Johnson, Nereida Rodríguez-Fernández, and Sérgio M. Rebelo (Eds.). Springer Nature Switzerland, Cham, 196–211.
76. Marshall McLuhan. 1975. McLuhan’s Laws of the Media. Technology and Culture 16, 1 (1975), 74–78. http://www.jstor.org/stable/3102368
77. Midjourney. 2022. Midjourney. https://www.midjourney.com/
78. Brendan Paul Murphy. 2023. Is There a Way to Pay Content Creators Whose Work is Used to Train AI? Yes, But It’s Not Foolproof. The Conversation. https://theconversation.com/is-there-a-way-to-pay-content-creators-whose-work-is-used-to-train-ai-yes-but-its-not-foolproof-199882
79. Thao Ngo and Nicole Krämer. 2022. Exploring Folk Theories of Algorithmic News Curation for Explainable Design. Behaviour & Information Technology 41, 15 (2022), 3346–3359. https://doi.org/10.1080/0144929X.2021.1987522
80. Beatrice Nolan. 2022. AI-Generated Art Is Not Copyrightable, Judge Rules. Business Insider. https://www.businessinsider.com/ai-image-generators-artists-copying-style-thousands-images-2022-10
81. Mimi Onuoha. 2018. On Art and Technology: The Power of Creating Our Own Worlds. Knight Foundation. https://knightfoundation.org/articles/on-art-and-technology-the-power-of-creating-our-own-worlds/
82. OpenAI. 2022. DALL-E 2. https://openai.com/dall-e-2
83. Charlie Parker, Sam Scott, and Alistair Geddes. 2019. Snowball Sampling. SAGE Research Methods Foundations. https://doi.org/10.4135/
84. Julie Perini. 2010. Art as Intervention: A Guide to Today’s Radical Art Practices. In Uses of a Whirlwind: Movement, Movements, and Contemporary Radical Currents in the United States. AK Press, Detroit, MI, 183–198.
85. Roelof Pieters and Samim Winiger. 2016. CreativeAI: On the Democratisation and Escalation of Creativity. Medium. https://medium.com/@creativeai/creativeai-9d4b2346faf3
86. Luke Plunkett. 2022. AI Creating ‘Art’ Is An Ethical And Copyright Nightmare. Kotaku. https://kotaku.com/ai-art-dall-e-midjourney-stable-diffusion-copyright-1849388060
87. Vinodkumar Prabhakaran, Margaret Mitchell, Timnit Gebru, and Iason Gabriel. 2022. A Human Rights-Based Approach to Responsible AI. arxiv:2210.02667 [cs.AI]
88. Rida Qadri, Renee Shelby, Cynthia L. Bennett, and Emily Denton. 2023. AI’s Regimes of Representation: A Community-Centered Study of Text-to-Image Models in South Asia. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (Chicago, IL, USA) (FAccT ’23). Association for Computing Machinery, New York, NY, USA, 506–517. https://doi.org/10.1145/3593013.3594016
89. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning Transferable Visual Models From Natural Language Supervision. In Proceedings of the 38th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 139), Marina Meila and Tong Zhang (Eds.). PMLR, Virtual Event, 8748–8763. https://proceedings.mlr.press/v139/radford21a.html
90. Bogdana Rakova, Jingying Yang, Henriette Cramer, and Rumman Chowdhury. 2021. Where Responsible AI Meets Reality: Practitioner Perspectives on Enablers for Shifting Organizational Practices. Proc. ACM Hum.-Comput. Interact. 5, CSCW1, Article 7 (Apr 2021), 23 pages. https://doi.org/10.1145/3449081
91. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-Shot Text-to-Image Generation. In Proceedings of the 38th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 139), Marina Meila and Tong Zhang (Eds.). PMLR, Virtual Event, 8821–8831. https://proceedings.mlr.press/v139/ramesh21a.html
92. Shalaleh Rismani, Renee Shelby, Andrew Smart, Renelito Delos Santos, AJung Moon, and Negar Rostamzadeh. 2023. Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (Montréal, QC, Canada) (AIES ’23). Association for Computing Machinery, New York, NY, USA, 70–83. https://doi.org/10.1145/3600211.3604685
93. Adi Robertson. 2022. The US Copyright Office Says an AI Can’t Copyright Its Art. The Verge. https://www.theverge.com/2022/2/21/22944335/us-copyright-office-reject-ai-generated-art-recent-entrance-to-paradise
94. Kevin Roose. 2022. An A.I.-Generated Picture Won a Prize. Artists Aren’t Happy. New York Times. https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html
95. Daniela K. Rosner, Saba Kawas, Wenqi Li, Nicole Tilly, and Yi-Chen Sung. 2016. Out of Time, Out of Place: Reflections on Design Workshops as a Research Method. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (San Francisco, California, USA) (CSCW ’16). Association for Computing Machinery, New York, NY, USA, 1131–1141. https://doi.org/10.1145/2818048.2820021
96. Negar Rostamzadeh, Emily Denton, and Linda Petrini. 2021. Ethics and Creativity in Computer Vision. arxiv:2112.03111 [cs.CV]
97. Rob Salkowitz. 2022. AI Is Coming For Commercial Art Jobs. Can It Be Stopped? Forbes. https://www.forbes.com/sites/robsalkowitz/2022/09/16/ai-is-coming-for-commercial-art-jobs-can-it-be-stopped/
98. Eryk Salvaggio. 2023. Infinite Barnacle: The AI Image and Imagination in GANs from Personal Snapshots. Leonardo 56, 6 (2023), 575–578. https://doi.org/10.1162/leon_a_02404
99. Eryk Salvaggio. 2023. Seeing Like a Dataset: Notes on AI Photography. Interactions 30, 3 (May 2023), 34–37. https://doi.org/10.1145/3587241
100. Laura Savolainen. 2022. The Shadow Banning Controversy: Perceived Governance and Algorithmic Folklore. Media, Culture & Society 44, 6 (2022), 1091–1109. https://doi.org/10.1177/01634437221077174
101. Andreas Schellewald. 2022. Theorizing “Stories About Algorithms” as a Mechanism in the Formation and Maintenance of Algorithmic Imaginaries. Social Media + Society 8, 1 (2022), 1–10. https://doi.org/10.1177/20563051221077025
102. Daniel Schiff, Bogdana Rakova, Aladdin Ayesh, Anat Fanti, and Michael Lennon. 2020. Principles to Practices for Responsible AI: Closing the Gap. arxiv:2006.04707 [cs.CY]
103. Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. 2022. LAION-5B: An Open Large-scale Dataset for Training Next Generation Image-text Models. In Advances in Neural Information Processing Systems, S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (Eds.). Vol. 35. Curran Associates, Inc., New Orleans, LA, 25278–25294. https://proceedings.neurips.cc/paper_files/paper/2022/file/a1859debfb3b59d094f3504d5ebb6c25-Paper-Datasets_and_Benchmarks.pdf
104. Hugo Scurto, Baptiste Caramiaux, and Frederic Bevilacqua. 2021. Prototyping Machine Learning Through Diffractive Art Practice. In Designing Interactive Systems Conference 2021 (Virtual Event, USA) (DIS ’21). Association for Computing Machinery, New York, NY, USA, 2013–2025. https://doi.org/10.1145/3461778.3462163
105. Renee Shelby, Shalaleh Rismani, Kathryn Henne, AJung Moon, Negar Rostamzadeh, Paul Nicholas, N’Mah Yilla-Akbari, Jess Gallegos, Andrew Smart, Emilio Garcia, and Gurleen Virk. 2023. Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (Montréal, QC, Canada) (AIES ’23). Association for Computing Machinery, New York, NY, USA, 723–741. https://doi.org/10.1145/3600211.3604673
106. Ben Shneiderman. 2022. Human-Centered AI. Oxford University Press, Oxford, U.K.
107. Ignacio Siles, Edgar Gómez-Cruz, and Paola Ricaurte. 2023. Toward a Popular Theory of Algorithms. Popular Communication 21, 1 (2023), 57–70. https://doi.org/10.1080/15405702.2022.2103140
108. Ignacio Siles, Andrés Segura-Castillo, Ricardo Solís, and Mónica Sancho. 2020. Folk Theories of Algorithmic Recommendations on Spotify: Enacting Data Assemblages in the Global South. Big Data & Society 7, 1 (2020), 1–15. https://doi.org/10.1177/2053951720923377
109. Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, and Tom Goldstein. 2022. Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models. arxiv:2212.03860 [cs.LG]
110. Nasim Sonboli, Jessie J. Smith, Florencia Cabral Berenfus, Robin Burke, and Casey Fiesler. 2021. Fairness and Transparency in Recommendation: The Users’ Perspective. In Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (Utrecht, Netherlands) (UMAP ’21). Association for Computing Machinery, New York, NY, USA, 274–279. https://doi.org/10.1145/3450613.3456835
111. Nikita Soni, Schuyler Gleaves, Hannah Neff, Sarah Morrison-Smith, Shaghayegh Esmaeili, Ian Mayne, Sayli Bapat, Carrie Schuman, Kathryn A. Stofer, and Lisa Anthony. 2020. Adults’ and Children’s Mental Models for Gestural Interactions with Interactive Spherical Displays. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376468
112. Clay Spinuzzi. 2005. The Methodology of Participatory Design. Technical Communication 52, 2 (2005), 163–174.
113. Emily L. Spratt. 2018. Creation, Curation, and Classification: Mario Klingemann and Emily L. Spratt in Conversation. XRDS 24, 3 (Apr 2018), 34–43. https://doi.org/10.1145/3186677
114. Luke Stark and Kate Crawford. 2019. The Work of Art in the Age of Artificial Intelligence: What Artists Can Teach Us About the Ethics of Data Practice. Surveillance & Society 17, 3/4 (2019), 442–455. https://doi.org/10.24908/ss.v17i3/4.10821
115. Theresa Jean Tanenbaum, Amanda M. Williams, Audrey Desjardins, and Karen Tanenbaum. 2013. Democratizing Technology: Pleasure, Utility and Expressiveness in DIY and Maker Practice. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Paris, France) (CHI ’13). Association for Computing Machinery, New York, NY, USA, 2603–2612. https://doi.org/10.1145/2470654.2481360
116. Paul Thomas, Bodo Billerbeck, Nick Craswell, and Ryen W. White. 2019. Investigating Searchers’ Mental Models to Inform Search Explanations. ACM Trans. Inf. Syst. 38, 1, Article 10 (Dec 2019), 25 pages. https://doi.org/10.1145/3371390
117. Benjamin Toff and Rasmus Kleis Nielsen. 2018. “I Just Google It”: Folk Theories of Distributed Discovery. Journal of Communication 68, 3 (2018), 636–657. https://doi.org/10.1093/joc/jqy009
118. Nenad Tomasev, Jonathan Leader Maynard, and Iason Gabriel. 2022. Manifestations of Xenophobia in AI Systems. arxiv:2212.07877 [cs.CY]
119. Henriikka Vartiainen and Matti Tedre. 2023. Using Artificial Intelligence in Craft Education: Crafting With Text-to-Image Generative Models. Digital Creativity 34, 1 (2023), 1–21. https://doi.org/10.1080/14626268.2023.2174557
120. James Vincent. 2018. How Three French Students Used Borrowed Code to Put the First AI Portrait in Christie’s. The Verge. https://www.theverge.com/2018/10/23/18013190/ai-art-portrait-auction-christies-belamy-obvious-robbie-barrat-gans
121. Johanna Walker, Gefion Thuermer, Julian Vicens, and Elena Simperl. 2023. AI Art and Misinformation: Approaches and Strategies for Media Literacy and Fact Checking. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (Montréal, QC, Canada) (AIES ’23). Association for Computing Machinery, New York, NY, USA, 26–37. https://doi.org/10.1145/3600211.3604715
122. Richard Wallace. 1995. A.L.I.C.E. (Artificial Linguistic Internet Computer Entity). A.L.I.C.E. A.I Foundation. https://www.chatbots.org/chatbot/a.l.i.c.er
123. Etienne Wenger. 1999. Communities of Practice: Learning, Meaning, and Identity. Cambridge University Press, Cambridge, MA.
124. Mitchell Whitelaw. 2004. Metacreation: Art and Artificial Life. MIT Press, Cambridge, Massachusetts.
125. Steve Woolgar and Dorothy Pawluch. 1985. Ontological Gerrymandering: The Anatomy of Social Problems Explanations. Social Problems 32, 3 (1985), 214–227. https://doi.org/10.2307/800680
126. Chloe Xiang. 2022. Artists Are Revolting Against AI Art on ArtStation. Vice. https://www.vice.com/en/article/ake9me/artists-are-revolt-against-ai-art-on-artstation
127. Shuntaro Yoshida and Natsumi Fukasawa. 2022. How Artificial Intelligence Can Shape Choreography: The Significance of Techno-performance. Performance Paradigm 17 (2022), 67–86.
128. Rachel Young, Volha Kananovich, and Brett G. Johnson. 2023. Young Adults’ Folk Theories of How Social Media Harms Its Users. Mass Communication and Society 26, 1 (2023), 23–46. https://doi.org/10.1080/15205436.2021.1970186
129. Brita Ytre-Arne and Hallvard Moe. 2021. Folk Theories of Algorithms: Understanding Digital Irritation. Media, Culture & Society 43, 5 (2021), 807–824. https://doi.org/10.1177/0163443720972314
