Conspiracy theories: why they are believed and how they can be challenged

ABSTRACT The current study aimed: (i) to identify personal characteristics associated with endorsing conspiracy theories; and (ii) to investigate methods for dispelling conspiracy beliefs. Participants were shown a single conspiracy theory and completed questionnaires about their reasoning skills, types of information processing (System 1 vs. System 2), endorsement of paranormal beliefs, locus of control and pattern perception. To challenge the endorsement of the conspiracy, participants read either: (i) neutral information; (ii) a critical analysis of the vignette; (iii) a critical analysis of the vignette with discussion of realistic consequences; or (iv) a critical analysis of the vignette with “feeling of control” priming. Only addressing the consequences of the conspiracy theory decreased its endorsement. Furthermore, only type of information processing and belief in paranormal phenomena were associated with endorsement of the conspiracy. These findings are discussed in relation to previous studies and theories of conspiratorial ideation.

The COVID-19 crisis has demonstrated the consequences of people acting on conspiracy theories. Those who believed that the virus did not exist or that vaccination was intended to harm the population often breached the recommended safety standards, or refused to get vaccinated, thereby compromising communal immunity (Imhoff & Lamberty, 2020; Tomljenovic et al., 2019). This raises the question of how to decrease or at least challenge the endorsement of conspiracy theories, since believing in or agreeing with them is a first step towards acting upon them (Jolley & Douglas, 2014). It has been shown that endorsement of conspiracy theories is driven by non-analytic processing (Gligorić et al., 2021; Pytlik et al., 2020; Swami et al., 2014), the tendency to perceive deliberate patterns in random data (Van Prooijen et al., 2018) and a sense of low personal control (Van Prooijen & Acker, 2015), as well as often being accompanied by holding other unsupported beliefs such as those in paranormal phenomena (Rizeq et al., 2020; Van Prooijen et al., 2018). Although the body of research highlighting personal factors that are associated with belief in conspiracy theories is growing (e.g. Gligorić et al., 2021; Pytlik et al., 2020; Rizeq et al., 2020), there is less experimental research identifying effective measures to reduce such beliefs. Consequently, the present study evaluates the extent to which three intervention approaches, inspired by the proposed aetiology of conspiracy theories, can reduce agreement with them.
Conspiracy theories are defined as theoretical explanations for important events that attribute the agency behind the events to covert plans carried out by influential groups with malicious intent. Douglas and colleagues (2017) posited that people adopt conspiracy theories because they appear to gratify three major needs. First, the need to understand (epistemic need), as conspiracy theories provide vague, yet internally consistent and intuitive explanations for events. Second, such explanations in turn allow people to feel secure (existential need). Third, they also offer an opportunity to strengthen in-group cohesion and sustain a positive image of the group by relegating the responsibility for negative events to others (social need). Importantly, Douglas et al. (2017) suggest that, although people adopt conspiracy theories to fulfil these needs, the needs are not ultimately satisfied. Building on their work, Van Prooijen (2019a) proposed the existential threat model of conspiracy theories. According to this model, the presence of an existential threat (i.e. when anxiety arises from fundamental values or beliefs being challenged) serves to increase the need to understand the environment. Furthermore, if an antagonistic or despised group is present when this happens, that group comes to be deemed "them" (as opposed to "us") and becomes the conspirators.
To decrease the negative consequences of endorsing conspiracy theories, the current study aimed to identify the most effective approach to reducing belief in them. Currently, there exists only a small body of research on combating conspiracy theories, as opposed to outlining the variables associated with them. However, as understanding a phenomenon is a prerequisite for changing it, the present study also examined several variables associated with conspiracy theory endorsement.

Dual processes in thinking
An understanding of the apparent dual-process nature of thinking is fundamental to the investigation of belief in conspiracy theories because such processes relate to the nature of the quality checks that people apply to information, thereby determining which components of available information will be adopted. According to the dual-process framework espoused by Evans and Stanovich (2013a, 2013b), Type 1 processes can be described as intuitive, heuristic and associative in nature, and are defined in terms of being relatively undemanding of working-memory resources. Furthermore, correlated features of Type 1 processes indicate that they tend to be high capacity, rapid, non-conscious and capable of running in parallel. In contrast, Type 2 processes can be described as reflective, deliberate, analytic and controlled, and are defined in terms of requiring working-memory resources and being dedicated to hypothetical thinking. In addition, correlated features of Type 2 processes indicate they are slow, capacity limited, serial and conscious. Type 2 processes are less prone to biases in comparison to Type 1 processes, although they are not invulnerable to such biases, which may arise, for example, from the application of defective analytic operations (e.g. Evans, 2018; Evans & Stanovich, 2013a, 2013b). The Type 1 versus Type 2 distinction that we have outlined is very closely aligned with the System 1 versus System 2 distinction popularised by Kahneman (e.g. 2011). In the remainder of this paper, we employ the System 1 versus System 2 terminology as this reflects the predominant dual-process distinction that has been invoked in the literature on conspiracy beliefs.
System 2 processing has consistently been shown to have a negative association with the endorsement of conspiracy theories (Bonetto et al., 2018; Orosz et al., 2016; Pytlik et al., 2020; Swami et al., 2014; Van Prooijen, 2017). For example, Swami et al. (2014) provided experimental evidence that priming System 2, analytic processing reduced endorsement of conspiracy theories as compared to a baseline condition. Likewise, Bonetto et al. (2018) showed that priming a critical approach to information increased resistance to the acceptance of conspiracy theories as compared to a no-prime condition. Furthermore, Orosz et al. (2016) demonstrated that providing rational counterarguments to an artificially created conspiracy theory reduced belief in it. Extending these findings, Van Prooijen (2017) reported that System 2 analytic processing mediated the relationship between low education level and belief in simple solutions, which in turn was associated with greater endorsement of conspiracy theories. Specifically, education level was positively associated with analytic processing, which was negatively associated with belief in simple solutions.
Conversely, System 1 processing has been shown to be positively associated with a tendency to adopt a hasty generalisation fallacy and with endorsement of conspiracy theories (Pytlik et al., 2020). Moreover, from a review of the literature, Van Prooijen (2019b) concluded that belief in conspiracy theories results from overreliance on intuitive System 1 processing. A study by Gligorić et al. (2021), however, showed that when both intuitive System 1 processing and analytic System 2 processing were incorporated into a larger model predicting the endorsement of conspiracy theories, only the latter had a significant negative association with endorsement rates. This raises the question of whether it is a proclivity to use System 1 intuitive processing versus a proclivity not to use System 2 analytic processing that underlies the endorsement of conspiracy theories. The current study aimed to answer this question by including operationalisations of both systems simultaneously in a model predicting conspiracy theory endorsement.

Pattern perception
In addition to System 1 versus System 2 processing, belief in conspiracy theories has been shown to be positively associated with the tendency to perceive deliberately created patterns in randomly generated information (Van Prooijen et al., 2018). Walker et al. (2019) demonstrated that establishing non-existent patterns was generally related to accepting "pseudo-profound bullshit", which represents a substantially empty statement claiming deeper meaning by virtue of having a seemingly imposing form. Importantly, Walker et al. (2019) also reported that the relation between illusory pattern perception and bullshit receptivity was largely unaffected by the inclusion of an index of System 2 processing as a covariate, even though previous research has found that individuals who are less likely to engage in analytic thinking are more receptive to pseudo-profound bullshit (e.g. Pennycook et al., 2015).
These findings suggest that grand but empty statements appear to satisfy a specific need to establish patterns, rather than merely exploiting poor analytical assessment of their validity and content. To assess whether endorsement of conspiracy theories is likewise motivated by a desire to establish patterns, or is instead a consequence of invalid estimation of their accuracy, indices of System 1 and System 2 processing were considered in the present study together with those of illusory pattern perception.

Feelings of control
Corresponding to the existential need proposed by Douglas et al. (2017), Van Prooijen and Acker (2015) demonstrated that an increased feeling of control decreased belief in conspiracy theories. Participants who were asked to remember a situation where they had been in complete control were less likely to endorse conspiracy beliefs as compared to those who were asked to recall what they had for dinner. Similarly, Federico et al. (2018) showed that the perception that society is changing for the worse is associated with the endorsement of conspiracy theories. The threat of societal change is suggested to undermine an individual's conceptualisation of society and their place within it, and thereby their affective perceptions of themselves based on these constructs. This in turn suggests that when an individual expects or witnesses such a threat of change, they feel a loss of control as they cannot maintain the status quo. These results were replicated with respect to the COVID-19 pandemic, as the belief that the pandemic could not be controlled was found to mediate the relationship between the perceived risk of the virus and the endorsement of conspiracy beliefs associated with it (Šrol et al., 2021).
This relationship between situational changes in feelings of control and belief in conspiracy theories provides the foundation for a possible association with personality factors describing tendencies in control attribution. Indeed, external locus of control, as the attribution of control over actions, events and consequences to others, has been suggested to contribute to conspiracy beliefs (Biddlestone et al., 2020), although recent research has not directly addressed this possibility. Thus, the present research addressed this gap by adding the Internal Locus of Control, Powerful Others, and Chance subscales of Levenson's Multidimensional Locus of Control scale (Levenson, 1981) to the predictors of endorsement of conspiracy theories.

Unwarranted beliefs
Endorsement of conspiracy theories has also been consistently reported to be positively associated with acceptance of further unwarranted, implausible or paranormal beliefs (Darwin et al., 2011; Drinkwater et al., 2012; Dyrendal et al., 2021; Lobato et al., 2014; Rizeq et al., 2020; Van Prooijen et al., 2018). For example, Darwin et al. (2011) demonstrated positive correlations between believing that paranormal phenomena (e.g. witchcraft) are real and endorsing conspiracy theories. Similarly, Drinkwater et al. (2012) reported a positive association between holding paranormal beliefs and poor reality testing, arguably representing a lack of critical evaluation, and endorsement of globally known conspiracy theories (e.g. about the Apollo 11 landing or Adolf Hitler). Lobato et al. (2014) extended these latter results by demonstrating that beliefs in paranormal phenomena, in conspiracy theories and in pseudoscience are intercorrelated, and Rizeq et al. (2020) replicated the findings using structural equation modelling. Findings reported by Van Prooijen et al. (2018) suggest that the close relationship between belief in conspiracies and paranormal belief is a consequence of illusory pattern perception. Meanwhile, Dyrendal et al. (2021) reported paranormal beliefs to function as a positive mediator between schizotypal thinking and endorsement of conspiracy theories. Given that both conspiracy theories and beliefs in the paranormal are different forms of unwarranted or poorly substantiated beliefs, reflecting what Stanovich (2009, 2011) aptly named "contaminated mindware", their connection is unsurprising. In their review, Van Prooijen (2019b) used this relationship between belief in conspiracy theories and the paranormal as evidence that the former are based in predominantly System 1 processing. However, this is not necessarily the case. Pennycook et al. (2012) have demonstrated that paranormal and theistic beliefs are inversely related to analytic processing. They further make the argument that paranormal beliefs are counterintuitive, as they are supernatural (i.e. they are above or more than natural) and hence contradict natural laws and forces. This inverse relationship was replicated by Rizeq et al. (2020) simultaneously for conspiracy and paranormal beliefs. Consequently, returning to the question about System 1 and System 2 processing, endorsement of paranormal and conspiracy beliefs might be rooted in poor analytic skills rather than an overreliance on intuition. Thus, the present study aims to clarify this possibility.

Aims of the current study
The aims of the current study were twofold. First, following the suggestion of Gligorić et al. (2021) about multivariable investigations, the study assessed the relationship between the variables that we have outlined and the endorsement of conspiracy theories in a single model, rather than separately. Specifically, the current study included the following variables that reflect individual differences in cognition: (i) indices of intuitive System 1 processing versus analytic System 2 processing; (ii) a measure of illusory pattern perceptions; (iii) a measure of locus of control; and (iv) an index of belief in the paranormal.
Furthermore, the present investigation extended previous research with the addition of a measure of people's overestimation of their reasoning skills. Pennycook et al. (2017) have shown that those who overestimate their analytic abilities are less likely to apply them effectively when needed. Given the reported inverse relationship between effortful reasoning and endorsement of conspiracy theories (Gligorić et al., 2021; Pytlik et al., 2020; Rizeq et al., 2020; Swami et al., 2014; Van Prooijen, 2017), it was expected that those who overestimate their reasoning skills would be more likely to endorse conspiracy theories. Indeed, in a recent article, Binnendyk and Pennycook (2022) speculated about the potential relevance of people's overestimation of their reasoning skills and knowledge to the acceptance of conspiracy theories. Thus, the current study is the first to test this possibility directly.
The second key aim of the present study was to identify an effective approach to reduce the endorsement of conspiracy theories. Although simply pointing out that a theory is a conspiracy is insufficient to undermine its endorsement (Wood, 2015), facilitating analytic reasoning is likely to decrease both belief in conspiracy theories (Swami et al., 2014) and corresponding behaviour through dispelling myths about the object of the theory (Jolley & Douglas, 2014). Given that Orosz et al. (2016) have shown that logical arguments discrediting conspiracy theories can decrease their endorsement, this was one of the "intervention approaches" adopted in the current study. As this intervention was focused primarily on logical inconsistencies and the application of formal logic, this condition also partially mirrored the "technique rebuttal" approach that has been suggested to be effective in reducing acceptance of misinformation (McIntyre, 2021; Schmid & Betsch, 2019).
Another intervention method that was expected to dispel belief in conspiracy theories was inspired by the series of studies conducted by Van Prooijen and Van Dijk (2014). They demonstrated a positive association between the extent to which conspiracy theories have far-reaching and grave outcomes for their objects and people's belief in them. As such, it is likely that, apart from exposing the flawed nature of a conspiracy theory, logical arguments additionally need to demonstrate that the consequences of the "event" are not as grave as a person might think them to be. The final intervention method adopted in this study that was expected to challenge the endorsement of conspiracy theories involved increasing a sense of personal control in people, using a recall procedure (Douglas et al., 2014; Šrol et al., 2021; Van Prooijen & Acker, 2015) prior to presenting reason-based counterarguments.
The current study followed the example of Orosz et al. (2016) and Van Prooijen and Van Dijk (2014) in using artificially created general conspiracy theories based on existing ones. The creation of the theories also ensured that, to the best of the authors' knowledge, the conspiracy theories were false. However, to ensure that these theories resembled those that exist, a pilot study was carried out first. In this pilot study, six conspiracy theories were created and their endorsement was evaluated with respect to the Generic Conspiracist Belief Scale (Brotherton et al., 2013).
Afterwards, in the main experimental study, the proposed model and the efficacy of three intervention approaches were tested using just one selected conspiracy theory. Thus, the present study provides an assessment of methods that can be applied to reduce beliefs in conspiracy theories, in addition to establishing the most prominent factors driving such beliefs. It is also the first study to explore the endorsement of a conspiracy theory in terms of three distinct measures: agreement with the conspiracy theory, deeming it a true representation of world events, and perceiving it to be a realistic explanation of world events.
Although the study was not pre-registered, it follows the principles of best practice recommended by Simmons et al. (2012). In this respect, our method section details the determination of the requisite sample size, the nature of all selected measures, the use of exclusion criteria and the manipulations that were implemented in the study. Furthermore, the hypotheses that were formulated in advance of the study were as follows:

I. Endorsement of conspiracy theories will be positively associated with System 1 processing and negatively associated with System 2 processing.

II. Endorsement of conspiracy theories will be positively associated with people's overestimation of their reasoning skills.

III. Endorsement of conspiracy theories will be positively associated with an external locus of control and negatively associated with an acceptance of life being controlled by chance factors.

IV. Endorsement of conspiracy theories will be positively associated with the tendency to perceive patterns in randomly generated information.

V. Endorsement of conspiracy theories will be positively associated with beliefs in the paranormal.

VI. Purely logical refutation will decrease the endorsement of conspiracy theories more than exposure to information about the topic, but less than logical refutation combined with an increase in the sense of control or logical refutation combined with discussion of possible realistic consequences of the conspiracy theory.

Pilot study
The pilot study included 60 participants (53 female and 7 male; M age = 22 years old, SD age = 6.01) who were asked to read six artificially created conspiracy theory vignettes (provided in Appendix A in the Supplemental Material) and to rate whether the offered conspiracy theory was true, whether they agreed with the theory, and whether they thought the provided explanation was realistic. Participants were also asked to complete the Generic Conspiracist Belief Scale (GCB; Brotherton et al., 2013).
The results of the pilot study showed that, of the six created conspiracy theories, three could be used as approximations for real-life conspiracies, as they had weak to moderate correlations (ranging from .33 to .53) with belief in general conspiracy theories. These three conspiracy theories were on the following topics: major diseases being planned; changes in the power of a secret elite who treat ordinary people as collateral damage; and banking being designed by the wealthy to enslave the poor. The vignette that described planned diseases had a stronger correlation with the GCB (rho = .53, p < .001, for truth; rho = .45, p < .001, for agreement; rho = .47, p < .001, for realism) than either the vignette for change in power (rho = .37, p = .004, for truth; rho = .45, p < .001, for agreement; rho = .42, p = .001, for realism) or the vignette for banking (rho = .37, p = .003, for truth; rho = .33, p = .01, for agreement; rho = .44, p < .001, for realism), suggesting similarities to "natural" conspiracy theories (Brotherton et al., 2013; Drinkwater et al., 2020).
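For illustration, the coefficients above are Spearman rank correlations between each vignette rating and the GCB total. A minimal sketch with simulated data (the variables and their values are made up, not the pilot sample):

```python
# Spearman correlation between one vignette's truth ratings and GCB totals.
# All data below are simulated placeholders for illustration only.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
gcb = rng.integers(15, 76, size=60)                       # hypothetical GCB totals
noise = rng.normal(0, 1.0, size=60)
truth = np.clip(np.round(gcb / 15 + noise), 1, 7)         # 1-7 rating loosely tracking GCB

rho, p = spearmanr(truth, gcb)                            # rank-based, as reported above
```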
Given that the sample consisted of students, the fact that participants only partially, rather than completely, perceived the planned disease theory as true or realistic, or agreed with it, further highlighted its resemblance to real-life conspiracy theories, as participation in higher education has been shown to reduce endorsement of conspiracy theories (Van Prooijen, 2017; Van Prooijen & Van Dijk, 2014). Consequently, the planned disease conspiracy theory was chosen for use in the main study, as it had some of the highest correlations with the GCB and participants had varying ratings of agreement with it and of its truthfulness and realism.

Participants
Data were obtained from an initial sample of 334 participants, who were recruited via SONA and the Facebook social media platform, where a link to the survey was posted. Participant recruitment was extended to non-university community members to increase the generalisability of our findings. Owing to the online nature of the study, only participants who had complete responses to each question were included in the final analysis. Ninety-four participants did not complete all questionnaires and were excluded from the dataset, as incomplete responses were taken to signify a participant's wish to withdraw from the study. The final analysis was, therefore, conducted on a sample of 240 participants (195 female, 44 male and 1 undisclosed; M age = 24.93 years old, SD age = 9.12). A full description of the sample is presented in Table B1 in Appendix B. Although the study was not pre-registered, G*Power (Faul et al., 2009) was used for sensitivity power analysis to verify that the sample had sufficient power (.95) to detect small to medium effect sizes when using hierarchical regression (f 2 = .11) and when using multiple regression comparing interventions (f 2 = .08), both based on the standard alpha level (.05).
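A sensitivity analysis of this kind can be sketched with the noncentral-F approach that G*Power applies to linear multiple regression. The predictor counts below (8 for the hierarchical model, 3 intervention dummies for the group comparison) are illustrative assumptions, not figures taken from the paper:

```python
# Power of a fixed-effects multiple regression F test via the noncentral F
# distribution, following the G*Power convention lambda = f^2 * N.
from scipy.stats import f as f_dist, ncf

def regression_power(f2, n, n_predictors, alpha=0.05):
    """Power to detect an effect of size f^2 with n cases and n_predictors."""
    u = n_predictors             # numerator degrees of freedom
    v = n - n_predictors - 1     # denominator degrees of freedom
    lam = f2 * n                 # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, u, v)
    return 1 - ncf.cdf(f_crit, u, v, lam)

# With N = 240, an effect of f^2 = .11 is detectable with high power
# (assuming, e.g., 8 predictors in the hierarchical model):
print(regression_power(f2=0.11, n=240, n_predictors=8))
```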
The study was deployed through Qualtrics survey software. The average time a participant took to complete the study on this platform was 32 min. In terms of the highest reported level of education, participants most commonly reported an undergraduate degree (n = 131), followed by a high-school diploma (n = 55), a master's degree (n = 19), a qualification from a trade or technical school (n = 17), a PhD or doctorate (n = 15) and, finally, a middle-school qualification (n = 3).

Design
The study followed a 2 × 4 pretest-posttest design. Given that Swami et al. (2014) and Bonetto et al. (2018) have shown that priming System 2 analytic processing can reduce endorsement of conspiracy theories, the current study aimed to account for the possible priming effect arising from the prior solving of logic problems to induce an analytic mindset. Therefore, participants were randomly assigned into logic versus no-logic groups. Within each group, participants were further divided into one of the four intervention conditions: (i) control; (ii) analytic; (iii) consequence; and (iv) autonomy.

Materials and tasks
Conspiracy theory vignette. All participants were presented with the "planned disease" conspiracy theory developed during the pilot study. This claimed that there is a small elite that controls pharmaceutical companies and starts epidemics involving new diseases to earn profit from selling treatments. Participants were asked to rate the conspiracy theory's truthfulness, their agreement with it, and its realism, in each case using the three questions from the pilot study. Before participants viewed the conspiracy theory, the following warning was given: "Please read the following proposed (non-fact-checked) theory put forward by some, regarding historical and current events. You will be asked to note how much you agree with it."

Reasoning skills self-assessment. Participants were first asked to rate their reasoning skills from "poor" (1) to "excellent" (7). They were then presented with 16 syllogisms and probability estimation problems taken from De Neys and Franssens (2009), presented in a randomised order within each category (i.e. first randomised syllogisms and then randomised probability estimation problems). The answer to the self-rating question and the summed answers to the problems were converted into Z-scores. To index the accuracy of people's self-ratings of their reasoning abilities, the Z-score of the total score on the 16 problems was subtracted from the Z-score of the self-rating question. The resulting value represented the degree of overestimation of reasoning skills.
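The overestimation index described above amounts to a difference of standardised scores. A minimal sketch, with made-up ratings and problem scores (the variable names and values are illustrative only):

```python
# Overestimation of reasoning skills: z-score of self-rating minus z-score of
# demonstrated performance. Data below are invented for illustration.
import numpy as np
from scipy.stats import zscore

self_rating = np.array([6, 3, 7, 4, 5])          # 1-7 self-assessed reasoning skill
problems_correct = np.array([9, 12, 6, 14, 10])  # correct answers out of 16

overestimation = zscore(self_rating) - zscore(problems_correct)
# Positive values: self-rating exceeds demonstrated performance.
```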
Cognitive Reflection Test Long (CRT-L) version. To assess analytic and intuitive processing we used the CRT-L (Primi et al., 2016), which involves six items. The CRT-L was developed owing to the popularity of the original three-item version (Frederick, 2005), which increases the chances that participants will be familiar with it (Primi et al., 2016). An example of an item within the CRT-L is the following: "If three elves can wrap three toys in 1 h, how many elves are needed to wrap six toys in 2 h?". While the "intuitive" answer is six elves, the correct answer (otherwise known as the "reflective" response) is three elves.
The CRT-L has shown good convergent validity, as both the total score (summing correct answers) and the new items alone are significantly correlated with the original CRT (Primi et al., 2016). Based on the research of Erceg and Bubić (2017) and Pennycook et al. (2016) about scoring procedures for the CRT, the current study utilised two indices from the CRT-L: (i) CRT-reflective, which was calculated by summing all correct answers, with all incorrect responses being scored as 0; and (ii) CRT-intuitive, which was calculated by summing only intuitive incorrect responses, with correct and non-intuitive incorrect responses scored as 0. These two indices had acceptable internal consistency: CRT reflective Cronbach's alpha = .76; CRT intuitive Cronbach's alpha = .69. For both indices, new CRT items were significantly correlated with the old items: CRT reflective, r = .59, p < .001; CRT intuitive, r = .48, p < .001.
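The two scoring rules can be sketched as follows; the answer keys here are made-up placeholders, not the published CRT-L keys:

```python
# Two CRT-L indices: reflective = count of correct answers; intuitive = count of
# responses matching the "lure" answer (and not the correct one).
def score_crt(responses, correct, intuitive):
    """Return (reflective, intuitive) scores for one participant."""
    reflective = sum(r == c for r, c in zip(responses, correct))
    intuitive_score = sum(
        r == i and r != c for r, c, i in zip(responses, correct, intuitive)
    )
    return reflective, intuitive_score

correct = [3, 5, 47, 4, 29, 20]     # illustrative keys only
intuitive = [6, 100, 24, 9, 30, 10]
print(score_crt([3, 100, 24, 4, 30, 7], correct, intuitive))  # (2, 3)
```

A non-intuitive incorrect response (the final item above) contributes to neither index, as described in the scoring procedure.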
Revised Paranormal Belief Scale (PBS-R). The PBS-R (Tobacyk, 2004) was utilised to assess the extent to which participants believed in paranormal phenomena. The PBS-R is a 25-item Likert-scale questionnaire with seven subscales: (i) Traditional Religious Beliefs (e.g. "I believe in God"); (ii) Psi (e.g. "Psychokinesis, the movement of objects through psychic powers, does exist"); (iii) Witchcraft (e.g. "There are actual cases of witchcraft"); (iv) Superstition (e.g. "If you break a mirror, you will have bad luck"); (v) Spiritualism (e.g. "It is possible to communicate with the dead"); (vi) Extraordinary Life Forms (e.g. "The Loch Ness monster of Scotland exists"); and (vii) Precognition (e.g. "The horoscope accurately tells a person's future"). All items are rated on a scale from 1 ("Strongly Disagree") to 7 ("Strongly Agree"). The PBS-R has been used in previous research on conspiracy theories (Darwin et al., 2011; Drinkwater et al., 2012) and has been reported to have good test-retest reliability for the total score (.92) and for each of the subscales.

Levenson's (1973) Multidimensional Locus of Control Scale (LMLoC). The LMLoC was used to establish the agencies to which participants tend to attribute control over events. The LMLoC consists of 24 Likert-scale items scored from "Strongly Disagree" (−3) to "Strongly Agree" (+3). It includes three subscales: (i) Internal Locus of Control (ILoC; e.g. "How many friends I have depends on how nice a person I am"); (ii) Powerful Others (PO; e.g. "Getting what I want requires pleasing those people above me"); and (iii) Chance (C; e.g. "To a great extent my life is controlled by accidental happenings"). In the current sample, the three subscales had good internal consistency (Cronbach's alpha for ILoC = .70, for PO = .76, and for C = .77).
Tendency to perceive patterns in random information. People's tendency to perceive patterns in random information was estimated using the random coin toss simulation of Van Prooijen et al. (2018; using https://www.random.org/). One hundred semi-random coin tosses were generated, with equal numbers of heads and tails (50 each) being maintained. These tosses were then split into 10 sequences of 10, for instance "THHTTTHTTH". Participants were asked to rate each sequence on a scale from "Completely random" (1) to "Completely determined" (7). In addition to these 10 questions, participants were also asked to imagine that all the prior items were 100 throws of a single coin and, using the same scale, to grade how random or determined the overall outcome was. The resulting 11-item scale had good internal consistency, Cronbach's alpha = .93. To index illusory pattern perception, responses to the 11 items were averaged.
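The stimulus construction described above can be sketched as follows (Van Prooijen et al., 2018, generated tosses via random.org; Python's `random.shuffle` stands in here as an assumption for illustration):

```python
# Build 100 "semi-random" coin tosses with exactly 50 heads and 50 tails,
# then split them into ten 10-toss sequences shown to participants.
import random

tosses = list("H" * 50 + "T" * 50)   # enforce the 50/50 constraint
random.shuffle(tosses)               # randomise the order
sequences = ["".join(tosses[i:i + 10]) for i in range(0, 100, 10)]
```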

Procedure
The advertisement and information sheet for the study informed participants that the research was about conspiracy theories and involved reading a scenario and completing questionnaires. First, participants were randomly divided into logic and no-logic groups. The logic group (n = 119) was then presented with 16 reasoning problems before reading the conspiracy theory, while the no-logic group (n = 121) was asked to complete the reasoning items at the end of the study. Both groups were presented with the conspiracy theory vignette and were asked to rate its truthfulness, realism and their agreement with it.
Next, participants were randomly assigned to one of four intervention groups: (i) the control group, wherein after reading the conspiracy theory, participants were given information about what viruses and bacteria are, together with the dates and death tolls for pandemics that have happened in history (taken from Drexler, 2010, and Huremović, 2019), with this information excluding any attributions or causal explanations; (ii) the analytic group, wherein participants were provided with a formal logical analysis of the conspiracy theory that they had read, which outlined the fallacies and inconsistencies within it; (iii) the consequence group, wherein participants received the logical analysis of the theory they had read as well as an additional explanation that the consequences of the theory, even if assumed to be partially true, would not be detrimental; and (iv) the autonomy group, wherein participants were asked to remember a situation in which they felt in complete control and were then asked to write down its description before they read the logical analysis of the conspiracy theory, followed by clear guidelines about behaviour in the situation described in the theory. All intervention conditions are described in detail in Appendix C in the Supplemental Material.
Next, all participants were presented with the same conspiracy theory and asked once again to rate its truthfulness, their agreement with it, and its realism. Afterwards, participants from all groups were invited to fill out the three questionnaires (i.e. the CRT-L, the PBS-R and LMLoC) and were requested to complete the pattern perception task. After completing these questionnaires and tasks, the no-logic group was asked to rate their reasoning skills and complete the 16 reasoning problems. Then, both groups were asked for socio-demographic information. Lastly, all participants were debriefed and thanked for their participation. As in the pilot study, it was stressed that the conspiracy theory being read had been artificially created and was not an accurate representation of current or historical events.

Manipulation checks
Manipulation checks were carried out to ensure that the groups did not differ at baseline. These checks used the truth and agreement ratings given when participants saw the conspiracy theory for the first time, with the three experimental conditions entered as predictors and the neutral condition as the reference group. Participants in the reasoning condition, b = .005, [-.33, .35], p = .18, did not differ significantly in their agreement with the conspiracy theory from those in the control condition. A logistic regression with the same predictors was carried out for the realism ratings given for the conspiracy theory on its first encounter. This showed no significant difference in realism ratings between those in the reasoning condition, b = .30, [-.09, .23], p = .46, the consequence condition, b = .15, [-.12, .19], p = .72, and the autonomy condition, b = .63, [-.02, .30], p = .11, relative to the control condition.
A similar analysis was utilised to establish whether participants from the logic or the no-logic groups differed in their ratings of the conspiracy theory when presented for the first time. Those in the no-logic group, who completed the reasoning assessment at the end of the testing session, had significantly lower truth ratings for the theory at the baseline, b = -.29, [-.52, -.05], p = .02. However, there was no significant difference between the two groups for the agreement rating, b = -.002, [-.25, .26], p = .99. Realism ratings for the conspiracy theory also did not differ significantly between the logic and no-logic groups, b = -.22, [-.16, .07], p = .43.

Models
Hierarchical regressions were carried out for each of the three endorsement ratings (as outcome variables) elicited from participants prior to the introduction of the interventions. In these models, Step 1 included demographic information (age, sex, and level of education). At Step 2, the System 1 versus System 2 indices were added (i.e. CRT-intuitive and CRT-reflective), and at Step 3, the reasoning-skills overestimation index. Step 4 incorporated all three aspects of locus of control (the subscales of the LMLoC), Step 5 added the illusory pattern perception index, and Step 6 added general belief in paranormal phenomena (the total score on the PBS-R). Given that the logic and no-logic participants differed in their truth ratings for the presented conspiracy theory depending on when they completed the 16 reasoning problems, for the truth rating outcome the model also included a Step 7 coding whether the reasoning problems were solved before or after reading the conspiracy theory.
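The stepwise logic above — adding a block of predictors and testing whether it significantly improves fit — can be sketched with a nested-model F-test. This is a simplified illustration on simulated data (three of the seven steps, hypothetical variable names), not the authors' analysis code:

```python
import numpy as np

def ols_rss(X: np.ndarray, y: np.ndarray) -> float:
    """Residual sum of squares from an OLS fit (intercept included in X)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def nested_f(rss_small, rss_big, df_added, df_resid_big):
    """F statistic for whether an added block of predictors improves fit."""
    return ((rss_small - rss_big) / df_added) / (rss_big / df_resid_big)

rng = np.random.default_rng(1)
n = 240
demo = rng.normal(size=(n, 3))  # Step 1: age, sex, education (standardised)
crt = rng.normal(size=(n, 2))   # Step 2: CRT-intuitive, CRT-reflective
pbs = rng.normal(size=n)        # final step: PBS-R total (simplified)
# Simulated truth rating: small CRT-reflective effect, larger PBS-R effect.
truth = 0.4 * pbs - 0.1 * crt[:, 1] + rng.normal(size=n)

X1 = np.column_stack([np.ones(n), demo])
X2 = np.column_stack([X1, crt])
X3 = np.column_stack([X2, pbs])

rss = [ols_rss(X, truth) for X in (X1, X2, X3)]
F_crt = nested_f(rss[0], rss[1], df_added=2, df_resid_big=n - X2.shape[1])
F_pbs = nested_f(rss[1], rss[2], df_added=1, df_resid_big=n - X3.shape[1])
```

Because the models are nested, each added block can only reduce the residual sum of squares; the F-test asks whether that reduction is larger than chance given the degrees of freedom spent.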
The models describing all steps are presented in Appendix D of the Supplemental Material. The model fit at each step is presented in Table 1 and the full models are presented in Table 2.
The results of the hierarchical regression with the truth rating as the outcome variable showed that each step significantly improved the model, which also provided a good fit to the data at every step. In the full model, however, only three predictors had a significant association with the truth rating of the presented conspiracy theory. The CRT-reflective index and completing the reasoning problems after reading the conspiracy theory were negatively associated with rating the presented theory as true, b = -.12, [-.22, -.01], p = .01, and b = -.24, [-.44, -.04], p = .02, respectively. In addition, the total score on the PBS-R was positively associated with the truth rating of the conspiracy theory, b = .37, [.26, .47], p < .001.
Contrary to what was observed with the truth rating, only the full model for agreement with the presented theory provided a good fit to the data. This finding underscores the importance of the significant positive association between belief in the paranormal and agreement with the conspiracy theory, b = .18, [.04, .31], p = .004, which was the last addition to the model. The models for the realism of the presented theory, however, were closer to those for the truth ratings than to those for the agreement ratings. All steps provided a good fit to the data, and each step, apart from the fifth, was a significant improvement. In the full model, the total score on the PBS-R had a significant positive association with judging the theory as realistic, b = .53, [.05, .15], p < .01. Although the CRT-reflective index had a p value below .05 for its negative association with rating the conspiracy theory as realistic, the corresponding upper 95% bootstrapped confidence interval was close to zero, suggesting that this effect might be spurious.

Table 1. The model fit at each step relating to the endorsement of the conspiracy theory in terms of truth rating as outcome, agreement rating as outcome and realism rating as outcome.
Although the overall fit for the model explaining the variance in agreement with the conspiracy theory was significant, F(7, 232) = 3.84, p = .001, there were no confirmed effects for the individual predictors. As the 95% bootstrapped confidence interval for the precognition subscale included zero, despite the associated p value being lower than .05, this effect was treated as spurious. Conversely, while the overall fit for the realism rating of the conspiracy theory was not statistically significant, χ²(7) = 8.40, p = .30, belief in Psi was positively and significantly associated with rating the conspiracy theory as realistic.

Intervention effect
Multiple regressions were carried out on the change in ratings of the conspiracy theory between the second and first presentations, with the neutral condition as the reference category and the reasoning, consequence and autonomy conditions coded as dummy variables. The results are presented in Table 4. Only the consequence intervention was successful in challenging agreement with the presented theory. That is, participants who were shown the consequence analysis together with the logical analysis of the conspiracy theory were likely to reduce the degree to which they agreed with it, b = -.30, [-.57, -.07], p = .04. Although the overall fit of the model for the change in the agreement rating was not significant, F(3, 236) = 1.48, p = .22, it should be noted that this model was comparing groups rather than identifying associations.

Table 3. The effect of the acceptance of paranormal beliefs on the endorsement of the conspiracy theory in terms of truth rating as outcome, agreement rating as outcome and realism rating as outcome.
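The dummy-coding scheme used here can be sketched as follows. This is a simulated illustration (group sizes and effect size are invented, not the study's data): with an intercept plus one dummy per intervention, the intercept recovers the neutral group's mean change and each dummy coefficient recovers that condition's difference from neutral, which is what the reported b values express.

```python
import numpy as np

rng = np.random.default_rng(2)
n_per = 60
groups = np.repeat(["neutral", "reasoning", "consequence", "autonomy"], n_per)

# Simulated change in agreement (post minus pre); only the consequence
# condition is given a true decrease, mirroring the reported pattern.
change = rng.normal(0, 1, size=groups.size)
change[groups == "consequence"] -= 0.5

# Dummy-code the three interventions with "neutral" as the reference group.
X = np.column_stack([
    np.ones(groups.size),
    (groups == "reasoning").astype(float),
    (groups == "consequence").astype(float),
    (groups == "autonomy").astype(float),
])
beta, *_ = np.linalg.lstsq(X, change, rcond=None)
# beta[0]: mean change in the neutral group;
# beta[1:]: each condition's difference from neutral.
```

This equivalence between dummy coefficients and group-mean differences is exact, so the regression framing is simply a convenient way to attach confidence intervals to between-group contrasts.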

General discussion
The present study showed that both a reduced tendency to engage in System 2 analytic processing and an increased tendency to hold paranormal beliefs contributed to people's willingness to endorse a conspiracy theory as being true, which is consistent with previous literature (e.g. Gligorić et al., 2021). The present findings also demonstrated that increased System 2 processing was negatively associated with perceiving a conspiracy theory as realistic. Interestingly, however, increased System 2 reasoning alone was not enough to challenge the extent to which people were willing to agree with a conspiracy theory. In addition, and diverging from prior research (Van Prooijen, 2017), the present results did not yield a negative association between higher educational levels and endorsement of the conspiracy theory as being true. Given that Van Prooijen's (2017) study also included a measure of System 2 analytic processing in the reported model, one plausible explanation for the difference in results between the studies is the high level of education prevalent in the present sample, which restricted the range of the education variable (i.e. only three participants in our sample had secondary school as their highest achieved degree).
Although there are some reports suggesting the absence of a significant association between educational level and unwarranted beliefs (Browne et al., 2015;Lindeman, 2018), the lack of an association between education level and endorsement of the conspiracy theory in the present study might also be explained through the presence of the effect of the CRT-reflective index and the absence of an effect for the overestimation of reasoning abilities. As argued by Pennycook et al. (2016), the CRT-reflective index does not necessarily correspond to a person's ability to engage in System 2 analytic reasoning but may relate instead to their motivation (i.e. willingness or disposition) to engage in System 2 analytic reasoning. Adopting this assumption suggests that in the present study, many participants might have demonstrated a lack of motivation to apply their analytic reasoning skills, even though they had acquired such skills during their education, and thereby they did not critically evaluate the conspiracy theory before rating it as having some truth to it. This lack of motivation to apply analytic processing could also partially explain the absence of an effect of reasoning skills overestimation on the endorsement of the conspiracy theory in our study. It is possible that neither overconfidence nor an actual lack of analytic reasoning skills played a role because participants' lack of motivation to engage in analytic processing was the key. Another important issue to account for in relation to the present study is that we assessed people's overestimation in their analytic reasoning skills rather than their overestimation and resulting overconfidence in relation to a given topic, which has been shown to increase acceptance of associated conspiracy theories (Vitriol & Marsh, 2018). 
It is possible that unwarranted confidence in discerning business strategies and epidemiology rather than overconfidence in reasoning skills affects endorsement of conspiracy theories, which our study was not able to address directly.
Some indirect support for this latter assumption, however, perhaps derives from our unexpected finding that those participants who completed the deductive reasoning problems at the end of the study rated the presented conspiracy theory as less true than those who completed these problems at the beginning of the study. This result diverges from existing studies, wherein participants were less likely to endorse a conspiracy theory following priming for analytic processing (Bonetto et al., 2018; Swami et al., 2014). However, previous research that has introduced a manipulation to prime or encourage participants to engage in analytic processing has either used a modified scrambled-sentence task or a word-fluency task in which participants were required to read stimuli presented in a difficult-to-read font (Swami et al., 2014), or has adopted a Resistance to Persuasion scale (Briñol et al., 2004, as cited in Bonetto et al., 2018). In contrast, the current study asked participants to solve 16 logical syllogisms, which arguably requires considerable cognitive effort. As such, it is possible that participants who completed these problems "lost" their motivation to apply their reasoning skills afterwards.

Beliefs in the paranormal
The proposition that the conspiracy theory in the present study was deemed to be true by those who lacked the motivation rather than the skill to reason is also reflected in the positive association of paranormal beliefs with the agreement and realism ratings. Supporting previous research, those who held beliefs in the paranormal in general, and specifically in Psi powers, witchcraft, and superstition, were more likely to deem the presented conspiracy theory to be true (cf. Darwin et al., 2011; Drinkwater et al., 2012; Dyrendal et al., 2021; Lobato et al., 2014; Rizeq et al., 2020). Similarly, participants who held general paranormal beliefs, and particularly beliefs in Psi powers, were more likely to rate the conspiracy theory as realistic.
Although we did not establish a diagnosis for our participants in relation to personality disorders, their generally high education level suggests that they are unlikely to have acute symptoms of schizotypal personality disorder. This in turn raises the possibility that, in addition to schizotypal thinking (Dyrendal et al., 2021), there may be other mediators explaining the relationship between paranormal beliefs and endorsement of conspiracy theories. Interestingly, although in the present study the total score on the PBS-R was also positively associated with agreement with the presented conspiracy theory in addition to its endorsement as true, no single subscale of the PBS-R had a significant positive effect. This suggests that the general acceptance of some paranormal beliefs creates a vulnerability to agreeing with a conspiracy theory and corresponds to the proposition that "contaminated mindware" might underpin such effects (Rizeq et al., 2020; Stanovich, 2009, 2011). A further interesting finding in relation to the PBS-R in the present study was that the precognition subscale had a significant negative association with the truth ratings for the presented conspiracy theory. A possible reason behind this unexpected result might relate to the specific nature of precognition beliefs in the context of the specific conspiracy theory employed in the study. In the PBS-R, the precognition subscale corresponds to accepting that certain individuals can predict the future for unexplained reasons (Tobacyk, 2004). Given that this specific belief does not involve malicious intent of the type that is capitalised upon in the presented conspiracy theory, those who hold the belief might expect that individuals with "precognition" abilities would warn them about those who conspire to harm the public.
Consistent with previous research, perceiving truth in the conspiracy theory in the present study was associated with both reduced System 2 analytic processing and general acceptance of paranormal beliefs (Pennycook et al., 2012; Rizeq et al., 2020). Given that holding such beliefs is a demonstration of consistently not applying critical evaluation to available information, the relationship between these three factors is unsurprising. However, the lack of a significant effect of education and the reverse effect of the reasoning task again suggest that what is at play here is a lack of motivation to engage in analytic processing rather than an inability to do so. This explanation of perceiving conspiracy theories as true because of not engaging in System 2 processing contradicts the argument advanced by Van Prooijen (2019) that conspiracy beliefs are underpinned by System 1 processing. Nevertheless, it is the explanation that seems most consistent with the present results, whereby the System 1 CRT-intuitive index was unrelated to truth, agreement or realism ratings with respect to the presented conspiracy theory.

Illusory pattern perception
Contrary to expectations deriving from previous research (Van Prooijen et al., 2018), in the present study illusory pattern perception was not associated with endorsement of the conspiracy theory when the measure was included in a model that also contained the System 1 and System 2 processing indices. Walker et al.'s (2019) finding of a relationship between illusory pattern perception and accepting pseudo-profound bullshit, which appeals because of its form, suggests that mistaking random patterns for predetermined ones might facilitate belief in conspiracy theories not because of their content but because of how they are written. Taking this into account, a specific difference between the study of Van Prooijen et al. (2018) and the current one relates to how the conspiracies were presented. Van Prooijen et al. presented conspiracy theories in the form of statements, whereas in our study the conspiracy theory was described in a short paragraph, which might have decreased its "profoundness". The suggestion that illusory pattern perception might relate more to the form of a theory than to its content is further reinforced by the finding that there was no significant association between the chance-acceptance subscale of the LMLoC and the measures of belief in the presented conspiracy theory.

Locus of control
The absence of an effect in the present study of the internal locus of control subscale of the LMLoC on any of the measures relating to the belief in the presented conspiracy theory was unexpected. That said, no explicit link between internal or external locus of control and belief in conspiracy theories has been identified in prior research. Consequently, the data suggest that the tendency to attribute responsibility for events to oneself or to others is not associated with belief in the present conspiracy theory. Moreover, contrary to previous research (Douglas et al., 2014;Van Prooijen & Acker, 2015), directly evoking feelings of control in one of the intervention conditions in our study did not change the endorsement of the conspiracy theory when compared to a neutral condition.

Challenging the endorsement of conspiracy theories
The current results relating to intervention approaches demonstrate that neither reasoning alone, nor reasoning combined with feelings of control, is enough to challenge a person's belief in, or their agreement with, a conspiracy theory. On the one hand, given the discrepancy between the current findings and previous research (Douglas et al., 2014; Orosz et al., 2016; Van Prooijen & Acker, 2015), one explanation for the inconsistent results is a potential lack of effectiveness of the experimental manipulation in the present study. Specifically, the reasoning condition might not have been persuasive enough, and the question asking participants to recall a situation wherein they felt in control might not have functioned successfully to increase feelings of control. On the other hand, the consequence condition did lower agreement with the presented conspiracy theory, which suggests that an explanation based on an ineffective experimental manipulation might be too simplistic.
In addition, the neutral condition needs to be considered. In contrast to Orosz et al.'s (2016) study, which used a weather forecast for the control condition, the present research showed participants verified information about mechanisms behind the spread of infections for the control condition. It is possible that reading this information might have encouraged participants to re-evaluate the vignette that they had read, as the information was directly relevant. Furthermore, since the consequence condition also included the logical analysis of the conspiracy theory, the results highlight that using formal reasoning alone is not enough to dispel conspiracy theories. Such reasoning also needs to be followed by a demonstration of why, even if the conspiracy theory is assumed to be plausible, it still does not give rise to detrimental consequences (Van Prooijen & Van Dijk, 2014). It appears that directly addressing the real-life implications of the conspiracy theory provided the necessary addition to push participants beyond reasonable doubt in relation to disagreeing with the conspiracy theory. However, the presence of a significant effect only for the consequence intervention, and only for a measure of agreement with the presented theory, points to the need to craft anti-conspiracy arguments very carefully.
Indeed, given that the consequence intervention was the only intervention in our study that led to a significant reduction in a measure of people's agreement with a conspiracy theory, it is worth speculating further on the reasons for its facilitatory effects. We suggest that a key reason for its efficacy might be that it involves both a recognition of the assumptions and concerns expressed in a conspiracy theory and an explanation of how these assumptions and concerns are inconsequential in the final analysis. In this respect, we note that when debating with a person endorsing a conspiracy theory, recognising that person's concerns may be a useful and supportive approach. This finding also relates to the reported effectiveness of technique rebuttal in debating with those who apply conspiracy theories (McIntyre, 2021; Schmid & Betsch, 2019). The form in which technique rebuttal is applied also appears to be important, as a dry dispelling of logical fallacies is not enough to challenge the endorsement of conspiracy theories.

Limitations and future research
The present research is not without limitations that need to be carefully considered. Most importantly, the study was not pre-registered and, as such, the results are exploratory and should be confirmed in future research. Second, male and female participants were disproportionately represented in both the pilot study and the main study. However, as the final steps of the hierarchical regression models show, unlike in previous studies indicating that female participants were less likely to endorse COVID-19 conspiracy theories (Cassese et al., 2020), in our study sex did not have a significant effect on the endorsement of conspiracy theories.
Third, the current study used an artificially created conspiracy theory, which limits the inferences that can be drawn regarding the applicability of the identified effects to real-life conspiracy theories. Nevertheless, endorsement of the theory used in the main study was found to be moderately correlated with general conspiracy-theory beliefs in the pilot study. This provides a reasonable basis for the assumption that the findings from the main study can be extrapolated. Although the use of only one conspiracy theory can be seen as another limitation, it was constructed to be generic in nature and to incorporate all markers from the definition by Douglas et al. (2017), namely an explanation of an important world event (a pandemic and new diseases) focusing on the malevolent action of a small group of people (a world elite who want to make profit from people whom they purposefully infect).
The fourth important limitation of the present study relates to the use of three subjective items to measure the endorsement of the conspiracy theory. Although this necessarily questions the reliability of such assessments, it exemplifies that perceiving conspiracy theories as true is not equal to agreeing with them or to judging them as being realistic. As previous research has utilised agreement (Van Prooijen, 2017) and truth rating (Brotherton et al., 2013) seemingly interchangeably, the current study raises the question of whether such equivalence can be assumed.
Another potential limitation of the study was the use of information that is closely related to the topic of conspiracy theories as a control condition. Although participants in the control condition were not exposed to any invalid information, providing information about viruses, bacteria and past pandemics was very relevant to the topic of the conspiracy theory itself. Nevertheless, when a particular conspiracy theory becomes popular, people exposed to it will likely encounter closely related information. Thus, we believe that there is some legitimacy to our use of a neutrally themed yet reliable set of information on the topic of the conspiracy theory as a baseline to assess the effectiveness of our direct attempts to dispel the theory.
We further note that the results of the current study support the suggestion that the CRT-reflective index corresponds more to a measure of the motivation to apply System 2 analytic abilities rather than the possession of such abilities (Pennycook et al., 2016). We suggest that future studies would benefit from focusing further on this motivational aspect of the CRT-reflective index in the specific context relating to the endorsement of conspiracy theories. Similarly, although the present research builds on the previous study of Orosz et al. (2016) in assessing the effectiveness of a targeted intervention against an artificially created conspiracy theory, there needs to be further investigation focused on reducing the endorsement of real-life conspiracy theories. Although priming analytic reasoning (Swami et al., 2014) or "inoculation" before exposure (Bonetto et al., 2018) have been shown to be effective in decreasing belief in conspiracy theories, a more comprehensive approach to dispelling them would be of undeniable use.
Continuing with the applicability theme, research addressing the relationship between the endorsement of conspiracy theories and subsequent intentions to act consistently with them is scarce (Imhoff & Lamberty, 2020; Jolley & Douglas, 2014, 2017). Consequently, the methods that were found here to be effective in challenging the endorsement of conspiracy beliefs need to be tested in terms of their effectiveness in decreasing the likelihood of behaviours associated with such theories.

Conclusion
It has been suggested that belief in conspiracy theories is driven by gullibility and heuristic processing (Van Prooijen, 2019b). However, the current study suggests that instead it is driven by a lack of motivation to apply an analytic approach to the encountered information. Specifically, acceptance of paranormal beliefs creates a vulnerability to conspiracy theories by dulling the motivation to use critical evaluation when considering explanations for important world events. Moreover, when trying to dispel conspiracy theories, the use of pure reason is not enough. Instead, formal logical analysis needs to be complemented by demonstrations that a conspiracy theory is inconsistent with real life, especially with respect to the detrimental consequences that might arise.

Disclosure statement
No potential conflict of interest was reported by the author(s).