The Online Misinformation Engagement Framework

Research on online misinformation has evolved rapidly, but organizing its results and identifying open research questions is difficult without a systematic approach. We present the Online Misinformation Engagement Framework, which classifies people's engagement with online misinformation into four stages: selecting information sources, choosing what information to consume or ignore, evaluating the accuracy of the information and/or the credibility of the source, and reacting to the information.


A rapidly evolving body of research aims to understand people's engagement with online misinformation and to identify ways to address this issue using cognitive and behavioral interventions. Much of the current research has focused on how well people discern accurate information from misinformation and whether they are willing to share misinformation online (for a literature review, see [1]). But what are the cognitive and behavioral stages of people's engagement with online misinformation? And how can cognition and behavior at these stages be effectively targeted with different types of interventions?
This article introduces the Online Misinformation Engagement Framework, a first attempt to unify the different stages of engagement with online misinformation under a single conceptual umbrella. Unlike previous review articles that summarize findings on the antecedents of and interventions against believing and sharing misinformation (e.g., [1–4]), we offer a unifying framework that maps and systematizes the core stages at which people engage with online misinformation. Along with organizing results and identifying research gaps, the framework also aims to inform future interventions that target people's cognition and behavior at each stage. Although we focus on misinformation, our framework is applicable to engagement with information online more generally.

The four stages of engagement with online misinformation
The literature explicitly and implicitly points to a set of four core stages at which people engage with online misinformation (Table 1): source selection, information selection, evaluation, and reaction. Decisions that people make at each stage affect the other stages. For instance, the online sources a person selects shape the information they encounter and subsequently choose to either consume or ignore. Note that the stages of the framework are iterative, not strictly chronological. For example, realizing that a piece of information is inaccurate may prompt a person to unfollow the source of that information. The stages of engagement with online misinformation also tend to leave digital traces (Box 1).

Source selection
Source selection refers to a person's curation of sources within their online information environment. People can essentially design their own online information environments by selecting their sources (e.g., online newspapers, channels, blogs, podcasts, or individuals), which can vary dramatically in the quality of information they provide. This self-driven source selection is subject to environmental constraints such as a platform's affordances and network structure [29]. Furthermore, information is usually filtered and ranked by search engines and recommender systems, which impacts potential exposure to information from self-selected sources [30,31]. On social networks, people tend to interact with like-minded peers, forming ideologically homogeneous social clusters (for a review, see [32]). A growing literature, based largely on web browsing and social media data, has started to examine the downstream effects of source selection. For example, increasing exposure to partisan media by changing browser default settings and social media following patterns can erode trust in mainstream media [33]. However, of all four stages of engaging with misinformation online, the psychological mechanisms underlying source selection are the least understood.
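To make the filtering-and-ranking point concrete, the following minimal sketch (not any platform's actual algorithm) shows how a feed that weights predicted engagement against source credibility changes which of a user's self-selected sources actually surface; the function name, weights, and scores are illustrative assumptions.

```python
def rank_feed(posts, credibility, engagement_weight=0.8):
    """Rank posts by a weighted mix of predicted engagement and source credibility.

    posts: list of (post_id, source, predicted_engagement) tuples, engagement in [0, 1].
    credibility: dict mapping source name -> credibility score in [0, 1].
    The weighting is purely illustrative: the larger engagement_weight is, the more
    attention-grabbing content from low-credibility sources rises to the top.
    """
    def score(post):
        _, source, engagement = post
        return (engagement_weight * engagement
                + (1 - engagement_weight) * credibility.get(source, 0.5))
    return sorted(posts, key=score, reverse=True)


# The same self-selected sources can yield very different exposure under different weightings.
posts = [("p1", "tabloid_x", 0.9), ("p2", "broadsheet_y", 0.6), ("p3", "blog_z", 0.7)]
credibility = {"tabloid_x": 0.2, "broadsheet_y": 0.9, "blog_z": 0.5}
print(rank_feed(posts, credibility, engagement_weight=0.8))  # engagement-heavy ranking
print(rank_feed(posts, credibility, engagement_weight=0.2))  # credibility-heavy ranking
```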

Information selection
People also decide whether to consume or ignore the information they are exposed to. The challenge of information selection is intensified in the digital realm, where platforms curate content to attract attention [34] and valid credibility cues from the offline world (e.g., professional design or elevated social endorsements) are easy to fake online [35].

Box 1. Digital traces of the four stages.
The source selection stage can be measured by traceable user actions that are usually constrained by the platforms, such as the list of followed accounts on social media. Similarly, the reaction stage is based on a predefined set of possible actions, such as liking a post. Actions at the information selection stage are less traceable, but can be quantified by clicks, dwell time, or eye movements. The evaluation stage is usually only explicitly quantified in online studies (e.g., through accuracy ratings), but can also leave digital traces such as browser tracking data (e.g., visiting fact-checking sites). Traceability offers the potential for different outcome measures for interventions at each stage.
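As an illustration of Box 1, here is a minimal sketch of how stage-specific digital traces might be organized into one outcome measure per stage for an intervention study; the field names, the derived measures, and the low-quality-source share are assumptions, not measures used in the cited studies.

```python
from dataclasses import dataclass, field

@dataclass
class EngagementTraces:
    """Hypothetical digital traces for one user, grouped by framework stage."""
    followed_accounts: list = field(default_factory=list)  # source selection
    clicks: int = 0                                         # information selection
    dwell_time_s: float = 0.0                               # information selection
    fact_check_visits: int = 0                              # evaluation
    accuracy_ratings: list = field(default_factory=list)    # evaluation (study measure)
    shares: int = 0                                         # reaction
    comments: int = 0                                       # reaction

def stage_outcomes(traces: EngagementTraces, low_quality: set) -> dict:
    """Derive one simple outcome measure per stage from the traces."""
    n_followed = len(traces.followed_accounts) or 1
    return {
        "source_selection": sum(a in low_quality for a in traces.followed_accounts) / n_followed,
        "information_selection": traces.dwell_time_s,
        "evaluation": traces.fact_check_visits,
        "reaction": traces.shares + traces.comments,
    }
```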
Research has started to explore how people decide whether to consume a certain piece of content. For instance, people tend to select content that is negative, social, predictive of the future, and consistent with prior beliefs [11]. Moreover, in a simulated social media environment, people attend more to sensational than to credible content [36], and the presence of videos and photos increases dwell time for online news articles [37]. Furthermore, misinformation can be more novel and elicit different emotional responses than true information [10]. In sum, misinformation can exploit human psychology because it does not need to be grounded in reality.
Although misinformation constitutes only a small portion of people's online information consumption [38], certain demographics are more likely to be exposed to low-quality content, including conservatives and older adults [39,30]. It is still unclear, however, how people allocate their attention online [13], and the extent to which misinformation susceptibility is a result of decisions made at the information selection stage.

Evaluation
Once people are exposed to information, they may decide to evaluate it. This entails not only distinguishing accurate information from falsehood (e.g., [14]), but also separating low-quality from high-quality sources (e.g., [20]). The use of AI to create false but credible content at scale may amplify the challenge of evaluating information online (although there is still an ongoing discussion about the actual relevance of AI; see [40]). To successfully evaluate a piece of information for accuracy or a specific source for credibility, people must first decide to engage in this process. Research suggests that people often neglect to do so; for example, many share social media posts without reading beyond the headline ([41]; note that this finding illustrates that the framework's four stages are not meant to suggest a chronological and static order). Several competencies can help people evaluate information online. For instance, digital literacy encompasses the skills to navigate the online landscape, and assists with discerning credible from unreliable sources and recognizing common tactics for spreading misinformation [42]. Other individual-level factors may also be relevant. A recent review identified several cognitive and socio-affective drivers of false beliefs about both information and its source [1]: lack of analytical thinking [14], memory failures [15], illusory truth through repeated exposure [16], unreliable source cues [5], and emotional [17] and worldview-related influences [2]. However, even after successfully evaluating information for accuracy, people may still be influenced by misinformation they initially believed [43], and may choose to react to the information even if they judge it to be false.

Reaction
The reactions available to people online depend on the platform's choice architecture. For example, Facebook provides a range of emotive reactions and more traditional social networking features for engaging with posts (e.g., comments and shares), whereas TikTok emphasizes short-form video and creative interactions (e.g., "Duets," where users record their reaction alongside the original content). Research has mainly focused on who shares misinformation rather than why it is shared. Our ongoing work linking people's Twitter shares with their motives for sharing suggests that most people share information to express their opinion, connect with others, or draw attention to a topic. Although most people do not share misinformation [39], attending to factors other than accuracy can lead people to share misinformation even when they realize it is false [23]. Intentionally sharing misinformation is rare and is driven by motives including signaling group affiliation [44], self-promotion [45], and inciting chaos [46]. Another line of research suggests that social media promotes habitual sharing of misinformation ([47]; see also [48]). Importantly, not all reactions to misinformation on social media propagate it. Many people report seeing misinformation corrected (including by other social media users) and being corrected themselves by others when sharing misinformation [49]. These corrections may be effective irrespective of tone ([50]; see also [51]) (Box 2).
Box 2. Two typical individuals online.
Consider two individuals who adopt inherently different approaches at the source selection stage. Individual A meticulously follows social media accounts known for providing high-quality information, while Individual B indiscriminately follows accounts without considering their information quality. These differences in source selection affect both individuals' engagement with online misinformation at the other stages of the framework. At the information selection stage, Individual A encounters predominantly high-quality information, while Individual B wades through a mix of low- and high-quality content. At the evaluation stage, Individual A may rely on their high-quality source selection, which could occasionally lead them to fall into the trap of believing an inaccurate article (e.g., an article that pops up in the news feed because a friend shared it on social media), while Individual B may or may not be motivated or have learned to always evaluate information. This also illustrates that people may not always go through all four stages of the framework. For instance, Individual A regularly evaluates and selects high-quality sources, and therefore tends not to evaluate specific information. Individual B, on the other hand, often selects information and may, with sufficient motivation and time, make a habit of evaluating it. At the reaction stage, Individual B may be more likely to share misinformation simply due to being exposed to more of it. Tailored interventions for different stages and different individuals are crucial: Individual A may benefit most from interventions promoting the evaluation of specific pieces of information (e.g., interventions that teach lateral reading), whereas Individual B may benefit most from interventions that encourage higher-quality source selection or discourage sharing misinformation (e.g., interventions that introduce friction).

Entry points for interventions
Mapping the stages of engagement with online misinformation highlights that there are distinct entry points for interventions (see also Table 1). We now turn to behavioral and cognitive interventions targeting individual behavior, beliefs, attitudes, and competences. This includes all efforts before, during, and after people engage with online misinformation. Our aim is not to cover all interventions that have been tested in the literature, but rather to focus on a few prominent examples.
At the source selection stage, apps that introduce friction can help reduce the use of low-quality information platforms or services (e.g., the "one sec" app; [9]). These apps fall under the umbrella of self-nudging interventions, which seek to empower people to design their own choice architectures in order to make decisions in line with their goals [52]. Verified and transparent labels can also create awareness of the importance of selecting reliable information sources (e.g., NewsGuard). However, a recent field study found that, on average, source credibility labels had limited effects on visits to low-quality online sources, but did improve the news diet quality of the heaviest consumers of misinformation [53].
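As a rough illustration of the self-nudging idea, here is a minimal sketch of a friction prompt before opening sources the user has flagged as low quality; the domain list, delay, and prompt text are assumptions, and this is not how the "one sec" app is implemented.

```python
import time
from urllib.parse import urlparse

# Hypothetical, user-curated list of sources the user wants to add friction to.
FLAGGED_DOMAINS = {"example-clickbait.com", "example-rumormill.net"}

def open_with_friction(url: str, pause_s: float = 5.0) -> bool:
    """Self-nudge: pause and ask for confirmation before opening a self-flagged source."""
    domain = urlparse(url).netloc
    if domain in FLAGGED_DOMAINS:
        print(f"You flagged {domain} as low quality. Pausing for {pause_s:.0f} seconds...")
        time.sleep(pause_s)
        return input("Still want to open it? [y/N] ").strip().lower() == "y"
    return True  # unflagged sources open without friction
```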
At the information selection stage, warnings and content labels can alert people that information is unreliable or outdated [12], such as the label that The Guardian adds to older articles. Another tool for addressing low-quality information online is critical ignoring [13], the ability to choose what to ignore and where to invest one's attention. Critical ignoring relies on a set of cognitive strategies aimed at resisting certain types of information. For instance, people may choose to ignore clickbait articles or content that they have learned is manipulative [54].
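To illustrate the outdated-content label, here is a minimal sketch that attaches an age warning to old articles; the one-year threshold and label wording are assumptions, not The Guardian's actual rule.

```python
from datetime import date

def age_label(published: date, today=None, years: int = 1):
    """Return a warning label for articles older than a given number of years, else None."""
    today = today or date.today()
    age_days = (today - published).days
    if age_days > years * 365:
        return f"This article is more than {age_days // 365} year(s) old"
    return None

print(age_label(date(2019, 3, 1), today=date(2024, 3, 1)))  # "This article is more than 5 year(s) old"
```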
At the evaluation stage, debunking is a prominent intervention that provides corrective information in order to reduce false beliefs or misconceptions [18]. Another intervention involves learning and practicing lateral reading, that is, leaving a website to check what other credible sources say about the source or information. This intervention could entail ensuring that online environments link to external information and provide cross-references [19] or teaching the technique in classrooms [20]. Yet another intervention is psychological inoculation, in which people are preemptively exposed to a weakened dose of common misinformation strategies in order to make them more resilient to future manipulation attempts [21].¹ Some of these interventions may be administered together in the form of brief media literacy tips [22].
At the reaction stage, directing people's attention to the concept of accuracy may help curb the spread of misinformation online [23]. Similarly, introducing friction (e.g., asking, "Want to read this before sharing?") prompts people to pause and think instead of acting on an initial impulse [26]. Finally, increasing the salience of social norms, such as the descriptive norm that most people disapprove of sharing misinformation online, may encourage people to act accordingly [28].
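A minimal sketch of how these reaction-stage nudges could be combined in a sharing flow follows; the prompts, function name, and flow are illustrative assumptions rather than any platform's implementation.

```python
def confirm_share(post_title: str, user_opened_article: bool) -> bool:
    """Reaction-stage nudges: read-before-share friction plus an accuracy prompt."""
    if not user_opened_article:
        # Friction: a "Want to read this before sharing?"-style pause.
        answer = input(f'You have not opened "{post_title}" yet. Read it before sharing? [y/N] ')
        if answer.strip().lower() == "y":
            return False  # route the user back to the article instead of sharing
    # Accuracy prompt: shifts attention to accuracy but does not block the share.
    input("To the best of your knowledge, is this headline accurate? [y/n/unsure] ")
    return input("Share this post now? [y/N] ").strip().lower() == "y"
```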
Directions for future research
The Online Misinformation Engagement Framework is intended to identify gaps in the literature and guide future research by systematically mapping people's engagement with online misinformation. How do people select their sources of information online? How do they decide what information to consume or ignore? And what interventions are effective at each stage? We see great potential in using recently introduced social media simulators to study these questions [55,56]. These tools offer the opportunity to investigate scenarios that closely resemble real-world situations, track metrics such as dwell time, and provide dynamic feedback (e.g., changes in follower count), much as in field research, while maintaining experimental control.
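As a rough illustration of what such simulators can log, here is a toy sketch of a simulated feed that records dwell time and provides dynamic follower feedback; the class, the feedback rule, and the metrics are assumptions and not the design of the simulators in [55,56].

```python
import random
import time

class FeedSimulator:
    """Toy simulated feed: presents posts, records dwell time, and gives dynamic feedback."""

    def __init__(self, posts):
        self.posts = posts      # list of dicts, e.g., {"id": "p1", "text": "..."}
        self.followers = 100    # dynamic feedback signal shown to the participant
        self.log = []           # per-post dwell times, actions, and follower counts

    def show_post(self, post, react):
        start = time.monotonic()
        action = react(post)                        # participant's (or scripted agent's) decision
        dwell_s = time.monotonic() - start
        if action == "share":
            self.followers += random.randint(0, 3)  # illustrative follower feedback
        self.log.append({"post": post["id"], "dwell_s": dwell_s,
                         "action": action, "followers": self.followers})

    def run(self, react):
        for post in self.posts:
            self.show_post(post, react)
        return self.log

# Example run with a scripted agent that shares every other post.
sim = FeedSimulator([{"id": f"p{i}", "text": "..."} for i in range(4)])
print(sim.run(lambda post: "share" if int(post["id"][1:]) % 2 == 0 else "skip"))
```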

Conclusion
Misinformation is a defining characteristic of today's online information environment. This article introduces a framework to better organize the stages of engagement with online misinformation and identify entry points for effective targeted interventions. We emphasize that individual-level interventions against online misinformation must be accompanied by system-level changes (e.g., stronger regulation of social media) to fully address the issue [62]. Ultimately, some stages may require substantial changes to the online environment (e.g., ...).

¹ A recent reanalysis of earlier inoculation studies suggests that the intervention elicits more conservative responding (i.e., judging both true and false news as "false") rather than making people better able to discern between true and false news.

Table 1
The Online Misinformation Engagement Framework.

Source selection
Drivers: source cues [5], source like-mindedness [6], mindless access [7]
Example interventions: source credibility labels [8], friction [9]

Information selection
Drivers: novelty seeking [10], negativity bias [11]
Example interventions: labels and warning signs [12], critical ignoring [13]

Evaluation
Definition: evaluating the accuracy of the information and/or credibility of sources
Example behaviors: reviewing the information for consistency with memory, leaving a website to vet it and its information (lateral reading)
Object of engagement: specific sources or pieces of information
Outcome measures: accuracy/credibility ratings, confidence, self-reported or inferred use of assessment strategies
Drivers: intuitive thinking [14], cognitive failures [15], illusory truth [16], source cues [5], emotion [17], worldview [2]
Example interventions: debunking [18], lateral reading [19,20], inoculation [21], media literacy tips [22]

Reaction
Definition: judging whether and how to react to the information
Example behaviors: clicking a "share" button, commenting on a post