What Is the Problem with Misinformation? Fact-checking as a Sociotechnical and Problem-Solving Practice

ABSTRACT Misinformation is a complex and global problem with social and technical dimensions. It is a problem that diverse technologies both exacerbate and are enlisted to solve. It is also a problem that flourishes on platforms and has led to partnerships with platform companies. These sociotechnical dimensions of misinformation as a problem involve different actors. Some actors create or contribute to the problem, while others perceive it as their problem to solve and work to address it. Identifying the problem of misinformation is at the heart of problem-solving in fact-checking, as different actors have interests in how problems are discursively presented. This article draws on an international interview study conducted throughout 2020–2022 with 46 fact-checking actors (21 fact-checkers, 14 journalists, and 11 newsroom managers). The article analyzes how these actors reflect on “misinformation problems,” and how these problems become “fact-checking problems” for the actors to work with and solve. Ultimately, the article argues that fact-checking must be approached as a sociotechnical and problem-solving-oriented practice. Doing so highlights specific obstacles in information distribution and platform affordances.

Technology plays three key roles concerning "solving" misinformation problems. First, actors involved in solving misinformation problems consider why problems occur (technology as an enabler of problems). Second, actors consider how problems can be solved (technology as an enabler of problem-solving). Third, actors consider what a problem is when technology cannot provide a solution (problems dismissed as non-problems when technology cannot be used).
These sociotechnical dimensions of misinformation as a problem involve different actors, with some creating or contributing to the problem and others working to address it. Policymakers, governance bodies, non-governmental organizations (NGOs), journalists, fact-checking organizations, educators, and researchers contribute to efforts to solve the problem of misinformation. Platform companies facilitate misinformation through the sociotechnical infrastructures they provide for publishing and distributing it; simultaneously, they engage in initiatives to fight misinformation. For instance, platform companies support fact-checking organizations, enroll users to flag content, promote media literacy among users, and engage in active content moderation (Lien, Lee, and Tandoc 2022). Understanding misinformation problems is essential for fact-checking actors to determine whether they have the resources necessary for epistemic efforts (Ekström, Ramsälv, and Westlund 2022; Steensen, Kalsnes, and Westlund 2023). Fact-checking actors are situated in a complex platform environment that imposes constant constraints on their work. This article builds on 46 interviews with fact-checking actors, here defined as (1) people who directly produce fact-checks, (2) their managers (in newsrooms and independent organizations/NGOs), and (3) other journalists writing about fact-checking and misinformation but not producing actual fact-checks (e.g., reporters who cover the misinformation beat).
The article explores how fact-checking actors reflect on "misinformation problems" and how they gravitate towards "fact-checking problems." We thus also explore how these fact-checking actors identify and prioritize problems to solve as a community of practice. The article addresses the overall question: how do fact-checking actors understand misinformation as a problem for them to solve?
The article offers insights into the nature of the field when it comes to misinformation, fact-checking practices, and the role of technology and platforms. It argues that fact-checking must be approached as a sociotechnical and problem-solving-oriented practice. Doing so highlights obstacles related, for instance, to distribution and platform affordances. The fact-checking actors interviewed identify and articulate several problems, which in this article are synthesized into five key areas: (1) limited affordances of digital technologies; (2) limited agency on platform infrastructures; (3) limited expertise and human resources; (4) hostility towards fact-checking actors; and (5) fact-checks fueling misinformation.

Analytical Framework
The vibrant and collaborative global fact-checking community consists of actors including news organizations, civil society organizations, academic institutions, and independent fact-checking startups. The community includes projects that are collaborations between different kinds of institutions, as well as projects attached to news media consortiums and major global news agencies (Graves and Cherubini 2016). The fact-checking movement initially focused primarily on what practitioners call political fact-checking: identifying and checking truth claims made by politicians and other public figures. Beginning in 2016, emphasis has increasingly shifted toward debunking, or what Graves, Bélair-Gagnon, and Larsen (2023) refer to as "policing viral misinformation online." Debunking typically focuses on exposing absurd hoaxes, conspiracy theories, and fakes, as opposed to "elite political discourse." Debunking is closely linked to the third-party fact-checking programs run by platform companies, which have enrolled fact-checkers to rid their platforms of misinformation. Interviewers for this study asked about fact-checking practices including political fact-checking as well as debunking. These practices present overlapping but distinct challenges and rely on different methods and tools (Graves, Bélair-Gagnon, and Larsen 2023; Westlund et al. 2022).
Fact-checkers regularly communicate within the community, sharing experiences about their fact-checking practices and what technologies to use for different types of problems. For example, International Fact-Checking Network (IFCN) members have access to Slack channels for such exchanges. The global fact-checking movement has thus developed professional standards and practices, becoming a community in which actors support each other in problem-solving via experimentation, pooling resources, and information sharing. Such collaborative work is a key characteristic of what scholars refer to as a "community of practice," marked by (inter-institutional) collaborative networks in which actors seek to improve and learn through experimentation and interaction (Brookes and Waller 2022).
These exchanges of resources and experiences help fact-checkers in their shared pursuit of solving problems associated with misinformation. They cannot solve all problems on their own and have thus actively worked on partnerships with resourceful actors. IFCN members engage substantially in debunking on Facebook, incentivized and supported through their partnership with Meta (which also includes Instagram and WhatsApp). However, such partnerships take different shapes and do not exist for all platforms, limiting fact-checkers' ability to identify, track, and effectively combat disinformation on different social networks (e.g., IFCN 2022).
The partnerships with actors such as platform companies provide legitimacy and help support the work of fact-checkers (Graves and Lauer 2020). Fact-checking actors often operate within platform partnerships that provide financial flows of varying size and importance, access to sociotechnical resources such as systems for identifying potential misinformation, and algorithmic systems for distributing fact-checks (Bélair-Gagnon et al. 2023a). The following sections further unpack how fact-checking constitutes a sociotechnical and problem-solving practice.

Sociotechnical Practice
Adopting a sociotechnical approach entails focusing on how social actors and technologies (e.g., machines, digital infrastructures, algorithms) work together. Pickering labeled this dynamic the "mangle of practice," meaning an "evolving field of human and material agencies reciprocally engaged in a play of resistance and accommodation in which the former seeks to capture the latter" (1995, 23). Research has applied such a sociotechnical approach to the study of how humans and technology interact in organizations and workplaces. Digital infrastructures and platforms, utilizing artificial intelligence (AI) and algorithms for human-to-machine and machine-to-machine communication, are continuously gaining significance. Taking a sociotechnical approach to fact-checking practice means exploring the interplay between human fact-checkers and diverse sets of technologies, concerning how (a) they relate to misinformation problems, (b) technology shapes the conditions for their work, and (c) they make priorities for their fact-checking practices.
This article draws on the four A's framework (Lewis and Westlund 2015) for sociotechnical analysis. This framework consists of (social) actors, (technological) actants, audiences, and activities (e.g., news work, media innovation, and fact-checking). Like journalists and other knowledge workers, fact-checking actors rely on a range of digital communication, production, and information-handling tools to do their work. These tools range from common word processors and cloud services to specialized image verification software. The fact-checking community has developed proprietary tools to automate or facilitate specific tasks, and it relies on commercial and open-source software made available by third-party companies (Westlund et al. 2022). The nature of fact-checking actors' work (monitoring, analyzing, and intervening in mediated discourses) leaves them sensitive to the shifting sociotechnical affordances of digital platforms and networks. For example, fact-checking organizations associated with the IFCN dedicate few resources to monitoring and debunking video and audio because doing so is resource-demanding. Identifying deceptive videos, as opposed to textual material, is challenging and time-consuming, and YouTube has so far offered fact-checking actors only minimal financial or technological support, in contrast to other platforms, notably Meta's Third-Party Fact-Checking (3PFC) program on Facebook (Bélair-Gagnon et al. 2023a).
The fact-checking process can be organized conceptually into three main stages: identification, verification, and distribution (Full Fact 2022; Graves 2018; Nakov et al. 2021). In the identification stage, fact-checking actors monitor media outlets and analyze information flows to identify and prioritize potentially false claims or fake content. The verification stage centers on verifying the accuracy of public claims, or the authenticity of images and video, against authoritative sources. Verification also often relies on databases of previously checked material. The distribution stage entails publishing fact-checks, re-distributing these on social media platforms, and providing contextual data. A range of tools is available across these three stages, although the bulk of specialized fact-checking technologies are linked to the identification and verification stages (Westlund et al. 2022). Technologies for distribution include a set of digital and algorithmically operated sites and platforms. It is, however, difficult to know how these operate beyond basic manual interventions because of the lack of transparency into algorithmic decision-making processes. The distribution stage is vital for increasing exposure and attention to fact-checks, and for enhancing their impact on public discourse, but it remains a challenge for many smaller and independent members of the fact-checking community (Full Fact 2020; Graves and Cherubini 2016).
The sociotechnical nature of fact-checking work surfaces in many studies that document the methods fact-checking actors use to debunk false claims or content, and to combat misinformation. Research on fact-checking methods in different media systems and socio-political contexts has highlighted the paradox, often remarked on by practitioners, that the technologies facilitating fact-checking also drive the spread of misinformation (Haigh et al. 2018). This literature highlights the mediated nature of fact-checking work, in which finding misinformation requires fact-checking actors to "embed themselves in online environments" (McClure Haughey et al. 2020) and entails "submitting to a torrent of online political communication" (Graves 2016, 106). Fact-checking actors and misinformation reporters do their reporting immersed in changing online environments (McClure Haughey et al. 2020) and are therefore dependent on and vulnerable to the policies and infrastructure of platform companies (Bélair-Gagnon et al. 2023a). Conversely, fact-checking actors have carried out an organized campaign to create new data standards for platform companies, in effect restructuring the digital environment they operate in, to enable new fact-checking tools and methodologies (Graves and Anderson 2020).
Fact-checking is thus a sociotechnical practice, but much fact-checking work links to flows of information and misinformation on platforms as infrastructures mediating public communication. Signatories of the IFCN's Code of Principles are eligible as partners for Meta's worldwide 3PFC program as well as a similar effort from TikTok; such programs offer financial remuneration and/or access to technologies geared to solving problems of misinformation on their platforms. Technologies and platforms are thus important actants for fact-checking actors and their practices, and their sociotechnical affordances enable and constrain what fact-checkers can do.

Problem-Solving Practice
In solving problems, stakeholders must consider their resources and the actors involved, and how these can interact, when prioritizing which problems to work on. A fact-check aims to solve the problem of determining the level of truthfulness of specific content. In other contexts, fact-checking actors understand problems more broadly, for instance devising effective and sustainable long-term strategies for counteracting misinformation. When reflecting on and making priorities around misinformation problems, fact-checking actors are guided by norms, methods, and professional standards, as well as by their human and technological resources. They draw on these when assessing knowledge claims ex post publication, alongside other activities such as verifying sources, locations, and media content (Graves 2016; Graves, Bélair-Gagnon, and Larsen 2023). In contrast, journalists apply norms, standards, and methods in their epistemic practices, such as standards for assessing truthfulness and balancing knowledge claims, prior to publishing news (Ekström and Westlund 2019).
Scholars of creative problem-solving (CPS) identify distinct stages of problem finding, idea generation, and critical evaluation, emphasizing the originality and usefulness of solutions (Vernon, Hocking, and Tyler 2016). Scholars have discussed fact-finding and idea-finding as initial steps in problem-solving, with solution-finding as a subsequent step. Alternatively, the initial step involves seeking to understand the problem, followed by generating ideas and planning for action (Vernon, Hocking, and Tyler 2016). Similarly, fact-checking actors must reflect on the nature of misinformation problems, how they can identify them, and subsequently how they should work to advance solutions.
For fact-checkers, efforts to identify misinformation problems are an integral part of problem-solving practice, whether conscious or not.
The academic literature on problem-solving stresses that professionals need to work with and structure ill-defined problems so they can proceed with problem-solving efforts. Individuals often use their knowledge and past experiences, along with information cues about the problem, in their attempts to define and work with it (Vernon, Hocking, and Tyler 2016). This aligns with research on experiential learning in organizations, which focuses on how organizations take measures to learn collectively from good as well as bad experiences (see Kolb 1983). Organizations often gravitate towards problem-solving when facing problems and may risk engaging too little in reflecting on the problem itself. Thus, it is important to pay attention to the reflexivity involved when framing a problem statement. Markovitz (2020) argues that one should frame the problem independently from a specific solution, thereby giving space for multiple solutions. In this article, we argue that a specific problem of misinformation will be defined and dealt with differently by diverging actors, such as fact-checking actors, platform companies, and authorities.
A problem-solving perspective can also help reveal tensions arising when actors identify and try to "fix" a problem. The problem initially defined may not be the "right problem" to solve. That is, attempts to solve a problem based on an organization's or individual's worldview of what the problem is can exacerbate the problem. This is complicated in the context of fact-checking, which depends heavily on other actors, such as platform companies, with their own professional and commercial interests. It is also worth asking where the "problem-solvers" sit in the sociotechnical system and how much agency and power they have in solving problems. Problem-solving can be complex and ridden with tensions. As previously discussed, this article focuses on how fact-checking actors understand misinformation as a problem for them to solve. In doing so, this study intends to advance new knowledge about the complex sociotechnical environment in which fact-checkers are embedded.

Data Collection and Analysis
Because this study focuses on fact-checking actors' conceptualizations of misinformation as fact-checking problems, data collection and analysis were conducted mainly via a grounded, qualitative approach (Corbin and Strauss 2014). The dataset includes 46 international, semi-structured interviews with fact-checkers (21), journalists working with fact-checking (14), and newsroom managers (11). The research was approved by the Norwegian Agency for Shared Services in Education and Research (SIKT) and by the University of Minnesota's and University of Wisconsin's ethics boards. All informants gave their informed consent to participate in the study. Interviews averaged approximately one hour (lasting between 45 and 90 minutes) and were conducted between October 2020 and July 2021, as part of a four-year research project focused on misinformation, journalism, and contemporary practices around technology and platforms. We used a semi-structured interview guide, including standard questions exploring how interviewees defined misinformation and reflected on and recognized the nuances of this problem. In the interviews, we asked how fact-checking actors addressed misinformation as a problem for them to solve, and thus explored their discourses around fact-checking practices. We explicitly asked interviewees about their uses of digital technologies for carrying out fact-checking. Follow-up questions were adapted to each interview, aiming to unpack in-depth understandings.
Interviewees were recruited from IFCN membership lists and subsequent snowball sampling. Demographically, 29 interviewees presented as male, and 17 presented as female. Forty-one interviewees were based in North America (15) and Europe (26). Other interviewees were based in Oceania (1), Asia (2), and Africa (2). The interviews were conducted mainly by video conference technology (Zoom) due to COVID-19 restrictions. The video conference approach has benefits in terms of access to participants, time, and expenses. It also reduces the potential impact of the researcher's presence in the field during data collection. However, it can be more difficult to build rapport and register visual cues with such an approach (Heiselberg and Stępińska 2022).
The interviews were professionally transcribed and imported into the qualitative analysis software package NVivo. Coding and analysis proceeded in several stages, with multiple meetings of the research team. The first round of project-wide coding produced a wide-ranging dataset from which this paper draws, with several hundred multi-level codes identifying and connecting practices, problems, technologies, actors, and concepts within the contemporary fact-checking landscape. For this article, we began by identifying and situating problems that fact-checking actors described explicitly and implicitly in their work. These initial problems were grouped into broad categories, including problems around content, newsroom resources and capabilities, technologies and technological developments, publics and audiences, and connected actors such as platforms and governments.
During the second stage, we inductively identified problem themes, informed by coded interviewee reflections. Using these themes, we iteratively refined codes, connections, patterns, and concepts. We also created targeted queries for data that might have been missed in the exploratory coding stage. For example, we created several NVivo keyword and word frequency queries, searching for explicit general terms such as "problem*," "issue*," and "difficult*," but also specific keywords based on developing "problem" categories (e.g., "resource*," "coordinat*," "accura*"). We also re-read and re-coded the structured portions of the interviews where fact-checking actors described how they defined and approached misinformation, how they identified and verified misinformation, and what role technologies played. In this stage we assessed the emerging themes, looking for similarities and dissimilarities, striving towards a higher level of abstraction and narrowing down the number of themes to 14. We explicitly sought to synthesize themes emerging from the overall dataset across all three types of fact-checking actors, while not seeking to differentiate themes, patterns, and scale for each of these (or by factors such as geographical base or gender). The epistemological and methodological approach in this article prioritizes the overall themes and refrains from less robust, semi-quantifiable claims associated with patterns in specific groups.
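The logic of the wildcard keyword queries described above can be illustrated with a minimal sketch. This is not the actual NVivo implementation; the function names and transcript snippets are hypothetical, and the sketch assumes only that a trailing asterisk matches any word continuation, as in NVivo-style text search:

```python
import re
from collections import Counter

def wildcard_to_regex(pattern: str) -> re.Pattern:
    # A trailing "*" matches any word continuation:
    # "problem*" matches "problem", "problems", "problematic", etc.
    if pattern.endswith("*"):
        return re.compile(r"\b" + re.escape(pattern[:-1]) + r"\w*", re.IGNORECASE)
    # Without a wildcard, match the whole word only.
    return re.compile(r"\b" + re.escape(pattern) + r"\b", re.IGNORECASE)

def keyword_frequencies(transcripts, patterns):
    # Count how often each wildcard pattern matches across all transcripts.
    counts = Counter()
    for text in transcripts:
        for p in patterns:
            counts[p] += len(wildcard_to_regex(p).findall(text))
    return counts

# Hypothetical transcript snippets for illustration only.
transcripts = [
    "The biggest problem is resources; these problems are difficult to solve.",
    "Accuracy issues make coordination difficult.",
]
freqs = keyword_frequencies(transcripts, ["problem*", "issue*", "difficult*"])
print(freqs)  # e.g., "problem*" counted twice in the first snippet
```

Such frequency counts only flag candidate passages for re-reading; as in the study, the interpretive coding itself remains a human task.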
In the third stage, we employed the sociotechnically guided 4 A's framework (Lewis and Westlund 2015) to analyze how problems intersected with diverse agents and specific activities; at this point, we constructed five key themes, merging overlapping themes to achieve a higher level of theoretical validity. Finally, we created a matrix highlighting the essence of each theme and selected illuminating quotes.

Results
I would generally say misinformation is a casual term for anything that is factually incorrect or contains inaccuracies, that's being shared anywhere, but, you know, online misinformation is what we focus on. And then disinformation I would define more specifically within that as malicious spreading or deliberate spreading of false information, false statements or images shared with a false context, where somebody is trying to misrepresent the narrative, but you have to be able to tell if there is some kind of intent there. (113)

The quote above illustrates how fact-checking actors make sense of misinformation and disinformation, and how they prioritize online misinformation. This section focuses on how fact-checking actors reflect on misinformation as a problem for them to solve. The fact-checking actors discuss misinformation as a key problem that they and others work with, not least because they find it difficult to determine the intent behind disinformation. The global misinformation problem leaves people misinformed, sometimes resulting in actions that cause harm to other people or society at large. Fact-checking actors discussed connections between platforms and misinformation:

Platforms aren't a thing if people aren't inside them. Of course, they do have algorithms, they do have their policies and whatever, but we are inside them, inside their mechanisms. If something is not working, maybe it's our fault too. (108)

Fact-checking actors stressed numerous issues with the multidimensional problem of misinformation, especially related to platforms. A fact-checker said, "I mean it's terrible that I point out a lot of problems, but I don't have lots of solutions," and continued: "I mean you always see their [the platforms'] transparency reports and it's like, we got 99 bazillion pieces of terrible content" (106). The multidimensionality of the misinformation problem includes that people and bots produce misinformation, such as false claims and inauthentic images,
and spread misinformation across the Web and multiple social media platforms. Both bots and people have different technological affordances, and there are varying possibilities for monitoring and analyzing misinformation. This section presents and discusses a selection of five key problems brought forward by fact-checking actors, deductively shaped into themes informed by the 4 A's. The fact-checking actors articulated a problem involving the limited affordances of digital technology. In this context, digital technology refers to platform infrastructures (especially social media) on the one hand, and a diverse set of technological systems and tools on the other. Platforms carry specific affordances, conditioned by technological possibilities and constraints, as well as by what companies decide to prioritize and make available through their strategies, policies, and community rules. Platforms do not create content themselves; they act as "hosts" for content that their users produce, publish, or (re-)circulate. Platform affordances refer to how platform companies have decided to digitally design the algorithms and logic of their platforms. Ultimately, the platform companies control what is enabled and possible to do on their platforms, as well as the disabling of certain behaviors. Although several platform companies are among the wealthiest and most resourceful companies on the globe, they are still to some extent constrained in terms of human and technological resources. Platforms have resources for content moderation but largely choose how they act on the spread of misinformation (i.e., platform governance).

The Limited Affordances of Digital Technologies
Some platforms provide technologies that fact-checking actors can use for fact-checking on their platforms, whereas others do not. Meta's 3PFC program has been developed to assist fact-checking actors in their practices. Fact-checking actors were critical of the usefulness of Facebook's algorithm outputs, which in their view do little to solve the problem of identifying potential misinformation. They reflected that the lists of potential misinformation generated may contain content with limited relevance. Using the tools in fact-checking practices outside of English-speaking countries was also referred to as "not relevant for editorial purposes" (237). Fact-checking actors problematized that platforms vary in how they deal with misinformation. One noted that "YouTube should do something about its own services" (234). Platforms apply their community standards for content moderation. Another fact-checking actor said: "You get no chance to debunk this stuff on YouTube, on their original disinformation, and this is to us a really huge problem, and this is also the reason we can't fact check, and we just don't do it on video" (233).
Fact-checking actors also discussed the lack of accountability and transparency of platforms. A major problem was associated with some platforms having closed infrastructures. End-to-end encryption on mobile applications like Telegram, Signal, and WhatsApp was perceived as good for user privacy but created opportunities for actors to spread misinformation across groups in closed spaces. Moreover, applications such as Telegram have been hacked, jeopardizing privacy. "What is important is that people share the disinformation from YouTube, not always through YouTube, but using messaging services, and as these messaging services, Telegram for example, or others, kind of encrypt it, it's impossible to start debunking in these messaging services," said a fact-checker (233).
There are also platforms hosting semi-public spaces, such as large Facebook groups set as private where admins must accept members; interactions within these spaces are excluded from analytical technologies such as Meta's CrowdTangle. Yet platforms are improving their sociotechnical capacities for fact-checking over time: "Every time we fact check something and then flag something by a human hand, and we have humans who do it, it teaches them. It trains their bot," said a fact-checker (106). Hence fact-checking actors discussed how platforms are training their algorithmic infrastructures to deal with misinformation.
Fact-checking actors also noted problems and limits of hardware and digital systems and tools, beyond the platform affordances reviewed above. A fact-checker said that "[Misinformation] is a very complex phenomenon, and technology alone is not able to solve it" (223). Another fact-checker said: "I think sometimes technology might have good intentions, but it's not always so well-honed it makes our lives easier" (113). Another fact-checking actor added: "There are a lot of startups that promise, you have this blockchain-powered AI, deep learning NFD, Gizmo, Widget thing that can automatically fact-check content and save it true or false, that's bullshit. That's never going to happen" (202). Overall, responses by fact-checking actors reflected a perception that current technologies associated with fact-checking have significant limitations and constraints, and thus cannot help them easily solve the problems they face.
Contemporary technologies cannot adequately sort through, with the right nuance, the sheer amount of misinformation from posts, websites, videos, and audio. For instance, available technologies for visual detection come with limitations, and thus fact-checking actors discussed having to use their human eye. A fact-checker said that some actors post images of text, possibly to evade search tools that more easily identify misinformation in text than in visuals (144). Similarly, fact-checking actors found it problematic and time-consuming to find relevant claims and identify how these are spreading. Working with misinformation problems on some platforms involves manual work, as specific tools and systems are absent. Fact-checking actors discussed that misinformation in the form of videos and audio is challenging, as it is difficult and time-consuming for them to tackle this problem without sophisticated technology. There are of course connections to platforms insofar as some platforms, such as YouTube and TikTok, feature video content, including videos with misinformation.

Limited Agency on Platform Infrastructures
The policy of how to deal with certain content labeling, like you couldn't be more different between Facebook, Twitter and YouTube or TikTok for that matter. On one platform, they don't take down, they just put a label on it. On another platform they label in very general terms, but they don't take any further action. On another platform, if it violates the rules, they take it down. That gives a very different reaction to that content, and we've seen that in action. (202)

Platform governance refers to the policies by which platforms operate, many of which are self-created. The platforms own, control, and moderate their platform infrastructures, and oversee what is visible and not visible. Fact-checking actors found it problematic to have such limited agency on the platform infrastructures. They reflected on how platforms, as technological infrastructures, enabled various agents to produce and spread misinformation. Numerous platforms operate in their own ways, and misinformation spreads differently across them. These companies have different policies for their communities and content moderation, mixing public and private settings, with some using end-to-end encryption. Fact-checking actors discussed how they struggle to navigate platform content moderation policies, and how platform actions such as deletion of content rather than labeling can result in misinformation flourishing further (202). Relatedly, fact-checking actors have adapted by deliberately designing their disseminated fact-check products so that they cannot easily be reframed and reused for misinformation circulation (e.g., overlays).
As part of their platform governance, some platform companies collaborate with third-party fact-checking actors. The terms of these cross-sector partnerships, in combination with the infrastructure of the platforms themselves, influence what fact-checking actors can do on and with specific platforms. For example, platforms may enforce policies defining what constitutes a legitimate target for fact-checking efforts. While running an international fact-checking program, Meta does not support the fact-checking of politicians, meaning fact-checking actors cannot flag or receive reimbursements for such fact-checks within the program. They can do so in their parallel and self-funded practices. A fact-checker said: "We do the fact check anyway, but depending on which platform it is, they may or may not take action on it" (202).
Fact-checking actors also pointed to other problems with the algorithms of platforms, such as the accuracy of automated flagging, and that commercial logics seemingly stand in the way of platforms changing their algorithms. A fact-checker said: "We're baffled. We're absolutely baffled. When we do a debunk, Google generally traditionally actually gives better search results to the false claims than to our debunk" (107). Reflecting on algorithmic automation underperformance, a fact-checker discussed two potential reasons for it and said: "One might be that there's a business model that makes them make so much money, that they don't change the algorithm, that they don't want to, we don't know," and then continued: "the other is definitely that they are not able to create an algorithm that is self-performing" (101).
The impact of platform policies, or their absence, depends on the nature of the content and the affordances of the platform. Fact-checking actors reflected that some platforms are associated with a lot of misinformation, yet do little or nothing to curb it. Fact-checking actors found this problematic in several ways. First, some fact-checking actors discussed problems associated with fact-checking altogether, such as difficulties with fact-checking content on YouTube without the platform's support in processing the (mis-)information flows. YouTube has indicated it will introduce support for so-called media review (audiovisual content) but has yet to take action. A fact-checker reflected on platforms not delivering what they promised: "I guess it's a policy problem within Google and YouTube" (233). Another fact-checker discussed: "What has materialized is these hour-long documentaries very smoothly produced with 2000 talking heads all spouting their own conspiracy theory and being heavily promoted through alternative channels going viral to closed networks, through social networks without fact checking" (141). Second, fact-checking actors found it problematic that pieces of misinformation can be published simultaneously on multiple social media platforms by their audiences. Discussing monitoring of extremist groups, a fact-checker said: "Some people will put theirs on Telegram. They'll post it on the Chans, they'll post it in Facebook groups" (106). Ultimately, for fact-checking actors, producing fact-checks on some platforms does not solve the misinformation problem if the same misinformation circulates on other platforms.

Limited Expertise and Human Resources
… a big team also means a lot of transfer of knowledge. So each person in this team is an expert on something, one for, I don't know, Google Street View or reverse image search, and this other person knows one or two things about archiving and databases, so there is a lot of knowledge transfer, and each one of us can profit from the knowledge of the other one. That is very good in a team with about 20–30 people. (238)

Fact-checking actors stressed the importance of collaboration and knowledge exchange. This is important since fact-checking entails a lot of different problems, because each claim that we check is very special, and you don't have something you can do with each fact check, a certain rule or order to check this, and this, and this, but you always have to find a way to prove a claim or to verify a claim, or to falsify. (238)

Fact-checking actors can consult different sources, such as getting in touch with a source (e.g., authorities) or source materials (e.g., publicly available data) deemed legitimate. They have explicit knowledge in specific areas, such as the political situation in their country, but they can also encounter problem areas with which they are less familiar. A fact-checker reflected: "Our challenge then is to say who are the experts in these areas?" (159). Fact-checking actors continuously fact-check mediatized misinformation. Yet they reflect that it is challenging but important to recognize that misinformation spreads not only online but also offline, where they clearly may overlook misinformation (157). Moreover, they expressed that misinformation is complicated and calls for contextual information and explanation. A fact-checker said they should "… not just debunk but explain why it matters, who it could harm, how it is harming people, how the technology companies are complicit" (106).
Fact-checking actors face organizational problems in terms of scarce economic, technological, and human resources. Problems include finding qualified fact-checking actors; fact-checking courses are provided by industry actors and associations, but far from all journalism schools teach the extensive verification methods used in fact-checking. Having access to relevant technologies cannot be taken for granted (this may require cross-sector partnerships), and using these technologies requires specialist expertise (Bélair-Gagnon et al. 2023a). Fact-checking actors discussed how their contacts in the fact-checking community (e.g., IFCN) help them deal with problems they cannot deal with on their own. They discussed the importance of collaborating with other fact-checking organizations when grappling with legal aspects of fact-checking practice. Their fact-checks can have a significant impact on those who produced the materials, so fact-checkers want to make sure they get their facts right and avoid the risk of being sued. A fact-checker argued that "if you write fact-checks for Facebook, it's possible that users get banned," and thus it is critical to have "bulletproof fact-checks" (238).
Fact-checking actors also discussed the need for legal resources concerning the development of technologies for automated detection. They discussed examples of emerging artificial intelligence (AI) technology used to identify and flag misinformation. When such technology flags something as misinformation when it is not, it has resulted in legal threats. From the fact-checking actors' perspective, having actors with legal expertise in the organization, or associated with it, is central to their activities. Moreover, fact-checking actors choose official sources such as authorities, with databases, documents, and court cases, which can be seen as pre-justified and trusted sources. Sometimes fact-checking actors need access to legal databases to produce fact-checks, and then legal expertise can be needed. Fact-checking actors discussed how their practices intersect with technology developments, such as audio forensics algorithms involving forensics experts and law enforcement representatives.
Different countries and regions have diverse politics, cultures, and media systems. These call for specialized expertise, and what we here refer to as translators of local knowledge and language. Smaller countries speaking a minority language cannot always make use of technologies and data repositories used internationally (Bélair-Gagnon et al. 2023a). As a fact-checker discussed: [Y]ou have to have a high degree of knowledge of the day-to-day discussions and news reporting in the country that you are fact-checking, otherwise you are almost blind. So obviously we need to make sure that it's well anchored into the system that you fact-check. (117)

Some fact-checking actors found it problematic that technologies are not available in their native languages, whereas others discussed there being little need. As a fact-checker said: [Small country] Twitter is so small that you don't need a tool to monitor it. If one person is tweeting something that is relevant enough so the [small country] Twitter bubble sort of goes off with it, then we will realize just by looking at the timeline, and it's the same with Facebook. (237)

Another fact-checker reflected on how a fact-checking organization operating in [continent] has trained and placed fact-checking actors in different capitals and state capitals so they have locally situated fact-checking expertise: "You have no idea what you're talking about, you haven't been to this place, you don't know these people, you don't know the background, so they understood that you really need to understand the politics to fact-check that correctly" (108). Fact-checking actors reflected that some national populations are turning more towards institutional sources, whereas others discussed that the opposite phenomenon is an issue. In some countries, such as China, the population uses their mother tongue for some platforms and English when using other platforms. An interviewee reflected: "[Y]ou have totally different populations using different platforms and different languages. And if you only fact check on one, that is the internationally approved way, you are essentially sort of leaving out a totally different population with other platforms" (120). Ultimately, geography and language carry important conditions for fact-checking practice problems.

Hostility Towards Fact-checking Actors
We need highly skilled people, which is rare enough, and then we need people who are also ready to accept these threats, you know, and there are some things we can't change. We are a transparent organization, so we publish the names of our authors. We say we are transparent, so we explain how we work. We are transparent, so we publish where you can approach us. (233)

Fact-checking actors work in an online and platformed environment, and the sociotechnical conditions expose them to high risks. Just as news publishers and journalists jeopardize their safety through work, fact-checking actors are exposed to various threats, including online harassment, legal intimidation, and life-threatening attacks by a diverse set of antagonists. Such actors include official and unofficial representatives of state power; organized networks and groups in civil society; and members of the wider public (i.e., audiences). A fact-checker discussed how they have had bomb squads come to their offices to safely remove bombs, while not changing their fact-checking practices and policies around authorial transparency (233). Other fact-checking actors described ongoing personal attacks and harassment from actors such as right-wing politicians as well as domestic and foreign publications that aim to discredit fact-checkers and the fact-checking unit (115). Others pointed to notions of self-censorship, reflecting on having to weigh pros and cons concerning issues to fact-check and asking themselves, "Is this worth getting ourselves into trouble for?" (113).
Fact-checking actors also shared the need to secure technical systems and staff, such as in countries with hostile governments, though they acknowledged that this can be challenging when expanding operations into new terrains. Some fact-checking organizations have taken systematic measures to reduce the visibility of fact-checking actors online and created filters so they are not easily approached and exposed to threats and harassment. Fact-checking actors reported that they encountered threats and harassment: "People find my personal email, people find my personal Facebook and Twitter and send nasty messages frequently" (116). The widespread concerns about safety and harassment affect workplace environments as well as their potential for recruitment (Bélair-Gagnon et al. 2023b). In 2022, the IFCN launched a global Legal Support Fund for fact-checkers (Bealor 2022).

Fact-Checks Fueling Misinformation
In our experience, the polarization has grown enormously, and people are just way more willing to believe any rumors that confirm their worldview because they distrust the media. They distrust experts. They distrust anyone that they view as on the other side, and anything that confirms their worldview is accepted uncritically. (141)

Fact-checking actors raised a set of problems around potential negative outcomes of their efforts, which paradoxically may further entrench misinformation. A fact-checker said: "In fact-checking, really the goal is for people not to read the story. We're trying to mark it false, so that it stops the spread of it. (…) It's like putting it on the record, but the point is really to sort of stop these stories from going viral" (116). In this context, there is an inherent contradiction at play when a fact-check may end up fueling the original problem of misinformation. This happens in at least three ways. First, fact-checking actors were aware of studies suggesting that exposure to corrections may trigger a so-called "backfire effect," whereby recipients rationalize and deepen their commitment to false beliefs. Subsequent research has cast doubt on this widely discussed phenomenon (Wood and Porter 2019). Second, fact-checking actors faced the constant concern that publishing a fact-check may augment the spread of misinformation, drawing more attention and exposure to an item of misinformation. A fact-checker reflected: "Something we thought about a lot going back to 2016 and further back was, when is it worthwhile to debunk something and when should we just leave it alone at the risk of giving it more oxygen?" (105). In discussions of these effects, a fact-checker said: "This is unfortunately already an old story because the fact check will never be as viral as the false claim" (237). Third, published fact-checks may be deliberately taken out of context and used for spreading misinformation. A fact-checker stressed how fact-checks can be misused and do harm: "I even think about it when I'm wording sentences and paragraphs of like, could someone take a screenshot of this and put it out of context?" (108).
Fact-checking actors discussed that there are limits as to which audiences they can influence with their fact-checks. One said: "I can only convince those already convinced" (105). Fact-checking actors discussed that they receive a significant number of leads from the public via electronic communication, and they have enrolled students and others to go through such materials on their behalf. These actors also reflected on problems associated with the public having diverging knowledge, different modes of interpretation, and different levels of trust in sources: "Where it gets complicated, I think, is that not everybody has a shared sense of reality these days when they're interpreting the information … It's the way I think people interpret the data that we put out there. That is where I see the problems with misinformation" (111). Another said: "People take in so much information on a given day, that they know they can't sort out what is real and what is not. It creates this climate where people don't know what's true. There's a lot of distrust" (143).

A Matrix of Fact-Checking Problems
Table 1 features the five problem descriptions, the key agents involved (the main ones listed first), and the key activities. Drawing on the 4A's analytical framework, it focuses on diverse agents and activities related to the main problems identified. The agents and problems are seen through the perspective of fact-checking actors. Table 1 illuminates reflections on fact-checking as a sociotechnical and problem-solving practice, albeit focusing only on reflections around the identification of problems; this article excludes the study of how actors engage in solving these problems.

Conclusion
This article explored both how fact-checking actors reflect on misinformation problems, and the problems central to those fact-checking actors. Our analysis approaches fact-checking as a sociotechnical and problem-solving practice. Our findings align with and contribute to existing scholarship on creative problem solving in organizations. For example, we found significant similarities between processual, selective problem-solving practices in fact-checking organizations and documented practices in other types of organizations (see Vernon, Hocking, and Tyler 2016). Successful companies are often associated with creating products or services that help solve customer problems (e.g., developing sophisticated customer support AI systems). Fact-checking actors similarly approach misinformation problems selectively, transforming some into fact-checking problems and producing outcomes in the form of informative fact-checks. The produced fact-check addresses whether a piece of information is truthful or not, but it does not provide a definitive solution to the problem of misinformation.
This study showed that, far from confronting all "misinformation problems," fact-checking actors expressed concern about a narrower set of specific fact-checking problems. We use the term "fact-checking problems" to refer to the diverse and distinct types of misinformation problems that fact-checking actors perceive as theirs to solve, or to participate in solving alongside partners. This finding is straightforward yet significant. Despite vast amounts of misinformation spreading, only certain problems are perceived as worth fact-checkers' epistemic efforts. There is a dynamic set of factors substantially limiting which misinformation problems fact-checking actors identify as theirs to prioritize for fact-checking. Fact-checkers predominantly focus on verifying content such as checkable truth claims and the authenticity of images, while giving less attention to specific (networks of) actors and associated behaviours. This article advances knowledge of how fact-checking professionals attempt to define misinformation as a problem. The fact-checking community subsequently develops practical responses and solutions, and these are further shaped by the sociotechnical environment. For example, as fact-checkers identify problems to solve, some problems require them to advance their expertise by learning how to use unfamiliar technologies. Members of the IFCN have developed inter-institutional collaborative networks in which they recurrently have practice-focused interactions associated with such sociotechnical challenges. As such, they form a so-called "community of practice," characterized by continuously evolving practice (Brookes and Waller 2022; c.f. Graves and Lauer 2020).

Table 1. The five problem areas fact-checkers face when they try to solve problems of misinformation.

The limited affordances of digital technologies. Problem description: the affordances and limits of technological systems and tools, as well as the sociotechnical infrastructures of platforms, shape conditions for spreading misinformation as well as for fact-checking. Key agents (actants): the technological tools and systems as well as platform infrastructures have powerful agency.

Hostility towards fact-checking actors. Problem description: the imminent risks of fact-checks being misused, and of the distribution of fact-checks exposing fact-checking actors to hostile actions. Activities: fact-checking organizations must recruit fact-checking actors prepared to deal with hostility and threats, while some take measures to reduce exposure to threats to them and their sources.

Fact-checks fueling misinformation. Problem description: the solution to the problem of misinformation, in the form of a fact-check, may paradoxically fuel rather than solve the problem. Key agents (audiences): the paradox of the public becoming more aware of misinformation through fact-checks about misinformation. Activities: well-understood difficulties associated with successfully distributing fact-checks to those needing them most, in a way that effectively counters misinformation. Fact-checking actors consider backfire effects associated with fact-checks taken out of context or drawing more "oxygen" to misinformation.
The problem-solving nature of fact-checking practice means that reflexive constructions of misinformation problems relate to fact-checking actors' resources, experiences, and ambitions. Their reflexive, discursive understanding of misinformation problems determines whether fact-checking actors will invest epistemic efforts. These epistemic efforts can include whether they will identify misinformation, engage through verification procedures, and produce a fact-check. Thus, fact-checking actors, and their community of practice, define what they will prioritize and with what they will engage. These priorities significantly influence what is fact-checked, which subsequently influences the societal impact of misinformation.
Identification of fact-checking problems to prioritize is linked to perceived problem areas, as reflected upon by the fact-checking actors. Our findings yield five key problem areas, analyzed by drawing on the sociotechnically oriented 4A's framework (Lewis and Westlund 2015):
1. The limited affordances of digital technologies
2. Limited agency on platform infrastructures
3. Limited expertise and human resources
4. Hostility toward fact-checking actors
5. Fact-checks fueling misinformation
These five problem areas contribute to how fact-checking actors perceive they can do their work, while also highlighting how technological actants constrain their work. Fact-checkers are constantly involved in what Pickering (1995, 22) calls a "dance of agency," in which they have the agency to use some technological actant but are then constrained by the agency of the same actant, thereby going back and forth between active and passive work.
Making priorities in fact-checking practice is impacted by the urgency and criticality of the misinformation problem, but perhaps even more so by sociotechnical and organizational resources. For example, while there are problems reaching those exposed to misinformation and changing their perceptions, fact-checking actors do not necessarily perceive this as their problem to solve. We find that fact-checking actors gravitate more towards solving the problems of identifying and verifying misinformation; some may feel they have completed their job once the fact-check has been published. This article also showed that fact-checking actors may prioritize fact-checking certain misinformation because of its perceived "solvability," based on existing resources. We recognize that fact-checking actors prioritize some fact-checks based on platform partnerships requiring them to debunk online falsehoods. They may also prioritize other topics such as political debates, despite the lack of partnership support, because these topics harmonize more with professional ideologies. There is also a tension between prioritizing seemingly harmful misinformation, which requires substantial work and resources and is associated with uncertain outcomes, vis-à-vis prioritizing misinformation that can be fact-checked relatively quickly.
This article charts problems relating to (a) the affordances and limits of platforms and technologies, and (b) the subsequent limited and diverging agency of fact-checking actors. Our findings suggest that the relationship between fact-checkers, platforms, and technologies can be compared with a situation in which a third-party cleaning company is hired to clean buildings. We can refer to this as the symbolic front stage cleaning metaphor, where fact-checkers are like a cleaning company that is hired to clean the buildings but is restrained from doing so properly due to both limited access and inadequate cleaning gear. For some buildings the cleaners only get access to the public lobby area, trying to clean whatever dirt is visible there, with the cleaning gear that the owner of the building provides them. At other buildings, the cleaning company is offered more access in terms of areas to clean and gear to use. The owners of the buildings control the access and gear provided to the cleaning company, yet it is the cleaning company that is publicly displayed as being responsible for how clean the buildings are. This symbolic front stage cleaning metaphor implies that the owners of the buildings are preoccupied with keeping up a clean public self-presentation but will not be held accountable for the dirty backstage. The platform companies' relationship with fact-checking actors resembles such a performative action, in line with Goffman's (1959) classical analysis of how self-presentation behaviors vary depending on whether they take place on the public-facing and visible front stage, or on the less visible and accessible backstage.
In sum, this article offers novel insights into key problem areas in fact-checking. In problem-solving, there are close links between the identification of problems, the prioritization of resources, and taking different actions to solve problems. The fact-checking movement has been significantly recognized amid the increased attention to misinformation in public debate. We recognize that fact-checking actors carry out important work. Yet we stress that the "solutions" to misinformation, offered in the form of fact-checks, only go so far. There are significant challenges in reaching and affecting publics with such fact-checks. Even when this succeeds, misinformation continues to flow. This article shows how fact-checking actors, confronted with the overall problem of misinformation, develop norms and epistemological practices that align with their understanding of which problems are theirs to possibly solve, amid their limited agency on platforms and their imperfect knowledge and constrained access to credible sources. Future research should investigate the problem-solving and sociotechnical nature of fact-checking, focusing on how technologies are used to solve diverse problems.

I am extremely skeptical […] around the whole concept of technological approaches to verifying and fixing what counts as true for a fact. So, this is where you very quickly run into a load of Bitcoin and blockchain enthusiasts. There's something that is not feasible, because we live in the real world, but (…) what can technology do to help with establishing a mutually agreed upon central repository of facts. (104)

[Y]ou want to check a claim about something in [country] and you don't know if this image is really from [country], then you can contact the XYZ person in [country] and ask him if he knows if this place is from [capital city]. (238)