Jumping to the wrong conclusions? An investigation of the mechanisms of reasoning errors in delusions

Understanding how people with delusions arrive at false conclusions is central to the refinement of cognitive behavioural interventions. Making hasty decisions based on limited data (‘jumping to conclusions’, JTC) is one potential causal mechanism, but reasoning errors may also result from other processes. In this study, we investigated the correlates of reasoning errors under differing task conditions in 204 participants with schizophrenia spectrum psychosis who completed three probabilistic reasoning tasks. Psychotic symptoms, affect, and IQ were also evaluated. We found that hasty decision makers were more likely to draw false conclusions, but only 37% of their reasoning errors were consistent with the limited data they had gathered. The remainder directly contradicted all the presented evidence. Reasoning errors showed task-dependent associations with IQ, affect, and psychotic symptoms. We conclude that limited data-gathering contributes to false conclusions but is not the only mechanism involved. Delusions may also be maintained by a tendency to disregard evidence. Low IQ and emotional biases may contribute to reasoning errors in more complex situations. Cognitive strategies to reduce reasoning errors should therefore extend beyond encouragement to gather more data, and incorporate interventions focused directly on these difficulties.


Introduction
In cognitive models of psychosis, reasoning errors are hypothesised to contribute to both the onset and the maintenance of delusions (e.g. Garety et al., 2007). People with psychosis are held to be prone to reasoning errors because of the increased frequency of characteristic cognitive biases and information processing difficulties associated with psychosis (Hemsley and Garety, 1986; Freeman, 1999, 2007, 2013; So et al., 2010). Psychological interventions designed to help people with delusions therefore target cognitive biases. The aim of the intervention is to modify or compensate for the biases, in order to reduce reasoning errors, and thereby facilitate belief change (e.g. Fowler et al., 1995). However, the effect sizes achieved by these interventions remain small, indicating the need for further development (NICE, 2009). Therapy advances could be informed by progress in two key areas. First, a better understanding of the nature and strength of the relationship between cognitive biases and reasoning errors would inform how much emphasis to place on cognitive biases when intervening with delusions. Second, an investigation of the contribution of other processes to reasoning errors would identify additional foci for intervention.
In this study, we set out to test the association between cognitive biases and reasoning errors, and to investigate the contribution to reasoning errors of other candidate psychological processes. We chose to investigate the 'jumping to conclusions' (JTC) data-gathering bias as this is the cognitive bias most robustly associated with psychosis. The bias manifests as the tendency to make hasty decisions with certainty on the basis of little evidence, and is reliably found in approximately half of people with schizophrenia. It is particularly associated with delusions. Around a fifth of the general population also jump to conclusions, and this percentage is increased in the context of high delusional ideation, an at-risk mental state, and in those who have recovered from psychosis (Garety et al., 2011; Fine et al., 2007; Peters and Garety, 2006; Peters et al., 2008; Woodward et al., 2009; van Dael et al., 2006; Broome et al., 2007; Freeman et al., 2005). The bias is usually assessed by the probabilistic reasoning paradigm, which involves weighing evidence to make decisions, for example, deciding from which of two jars of coloured beads a particular series of beads is being drawn. Most versions of the task employ neutral and abstract material (such as coloured beads) to minimise the influence of other social, affective and cognitive factors on reasoning. The jars may contain beads in an easily discriminable (e.g. 85:15) ratio, or in proportions that are less readily discriminable (e.g. 60:40). The sequence of draws and the 'correct' response are usually predetermined, allowing manipulation of the degree of ambiguity of the evidence, and its information value. Emotional versions of the task have been developed, using, for example, words describing personality attributes selected from a survey.
The JTC bias has been operationally defined as reaching a decision after fewer than three beads or words (e.g. Garety et al., 2005).
New psychological interventions for people with delusions, designed to specifically target cognitive biases, have been demonstrated to reduce the tendency to JTC on probabilistic reasoning tasks, by extending data-gathering. Such interventions lead to small, but clinically meaningful, changes in delusional beliefs and in quality of life (Woodward et al., 2009; Moritz et al., 2011, 2013; Ross et al., 2011; Waller et al., 2011; Garety et al., 2011). The implication is that reasoning errors, which usually act to maintain delusions, are reduced by extending data-gathering, thereby facilitating belief change.
Accounts of probabilistic reasoning in psychosis (White and Mansell, 2009; Moutoussis et al., 2011) identify three key mechanisms contributing to hasty decision making and linking it to reasoning errors: the information processing difficulties associated with acute psychotic symptoms, leading to reliance on salience at the expense of context; neurocognitive deficits or poor task understanding; and affective or motivational processes, such as threat-related processing or impulsivity. Evidence to date suggests a multifactorial model: the JTC bias is not restricted to those with acute psychosis (Garety et al., 1991; Hemsley, 2005; Peters and Garety, 2006; So et al., 2010; Speechley et al., 2010; Balzan et al., 2012), nor is it fully explained by neurocognitive deficits or limited understanding (van Dael et al., 2006; Garety et al., 1991, 2005; Bentall et al., 2009; Lincoln et al., 2010a; Menon et al., 2006; Mortimer et al., 1996; Dudley et al., 2011; Ormrod et al., 2012). Emotional content increases the tendency to JTC in all participants, irrespective of the presence of psychosis (Dudley and Over, 2003; Fine et al., 2007; Ellett et al., 2008; So et al., 2008; Lincoln et al., 2010b; Freeman et al., 2006; Colbert and Peters, 2002).
However, although task characteristics appear to be of little significance in the tendency to JTC (So et al., 2012), there is evidence for a psychosis-specific differential effect of task difficulty (ratio discriminability, ambiguity of evidence) and of both emotional and perceptual salience on reasoning errors (e.g. Speechley et al., 2010; Lincoln et al., 2010a; Dudley et al., 2011; Warman et al., 2007; Ormrod et al., 2012). This in turn suggests that reasoning errors may be influenced by psychological factors other than the tendency to make hasty decisions.
A comprehensive analysis of the factors contributing to reasoning errors, both between participants and between different tasks, should help to clarify the relative contributions of different psychological processes, and thereby increase the effectiveness of the new, brief cognitive interventions aimed at improving reasoning skills in psychosis.

The current study
We set out to investigate, in people with psychosis, the association of reasoning errors with the JTC bias (hasty decisions, based on limited data), positive psychotic symptoms (as an index of a salience-based information processing style), emotional processes and IQ on probabilistic reasoning tasks that differed in ratio discriminability, ambiguity of the evidence, and emotional content. The sample partly overlapped with that of So et al. (2012), but the focus of the two studies was entirely different: a detailed analysis of the process correlates of reasoning errors between tasks has not previously been undertaken.
We considered reasoning errors in two ways: firstly, at the level of the participant, identifying the characteristics of those prone to making reasoning errors; and secondly at the level of the individual response on each task (three per participant), identifying differences in the correlates of errors between tasks.
We tested two hypotheses: (1) the JTC bias will be associated with reasoning errors; and (2) independently of their association with the JTC bias, reasoning errors will show task-specific associations with the other psychological processes implicated in probabilistic reasoning in people with psychosis. Specifically, we hypothesised that: (i) reasoning errors on the emotional version of the task would be more influenced by affective processes; (ii) cognitive deficits would play a more significant role on harder (high ambiguity, low discriminability) versions of the task; and (iii) on easy (low ambiguity, high discriminability), neutral versions of the task, in the context of reduced influence of cognitive deficits and emotional factors, reasoning errors would be primarily influenced by levels of positive psychotic symptomatology, as an index of the salience-based information processing style associated with acute psychosis.

Participants
The 204 individuals with a recent relapse of psychosis who took part in the current study represented all participants who completed a probabilistic reasoning task in the Psychological Prevention of Relapse in Psychosis Trial (PRP, Garety et al., 2008; ISRCTN83557988). The PRP Trial was a UK-based multicentre randomized controlled trial of cognitive behavioural therapy and family intervention for psychosis, recruiting in four National Health Service Trusts in London and East Anglia. Trial inclusion criteria were: aged between 18 and 65 years; a second or subsequent episode of psychosis starting not more than 3 months before consent to enter the trial; a current diagnosis of non-affective psychosis (schizophrenia, schizoaffective psychosis, delusional disorder), according to ICD-10 criteria and confirmed by SCAN interview conducted by trained raters (Schedules for Clinical Assessment in Neuropsychiatry, WHO, 1992a, 1992b); and a rating at initial assessment of at least 4 (moderate severity) on at least one positive psychotic symptom in the Positive and Negative Syndrome Scale (PANSS, Kay, 1991). The exclusion criteria comprised: a primary diagnosis of alcohol or substance dependency; organic syndrome or learning disability; inadequate command of English to engage in psychological therapy with an English-speaking therapist; unstable residential arrangements. For the current study, all participants scored on a delusion item of the SCAN (Section 19, rating ≥ 1) or PANSS (Items 1, 5 or 6, rating ≥ 4) during their trial participation.

Measures
Demographic information (age, self-reported ethnicity, length of illness, gender) was collected from the participant by self-report and corroborated with the clinical record. Length of illness was defined as the time in years since first presenting to services with a schizophrenia spectrum disorder until the point of consent to participate in the current study.
The PANSS comprises 30 items for rating psychotic symptomatology (seven positive; seven negative; and 16 general symptomatology). Each item is rated on a 7-point scale ranging from 1 (absent) to 7 (extreme). Symptoms are rated over the past 7 days. The mean of the delusions items from the positive scale (Item 1 (Delusions), Item 5 (Grandiosity) and Item 6 (Suspiciousness/Persecution)) was used as an index of current delusion severity, rated on the same scale; a score of 4 or more (moderate severity) on any of these scales was taken to indicate the current presence of a delusion. The 5-factor solution to the PANSS (van der Gaag et al., 2006) was used in this study, providing factor scores for positive symptoms; negative symptoms; disorganisation; excitement and emotion (anxiety/depression).

Probabilistic reasoning tasks
We used three variants of the probabilistic reasoning task: an easy, emotionally neutral task with beads in a high discriminability (85:15) ratio and unambiguous evidence; a hard neutral task with beads in a low discriminability (60:40) ratio and more ambiguous evidence; and a hard emotional task, with personality descriptors in a 60:40 ratio and ambiguous evidence. Participants were told that they could see as many items as they wished before making a decision and to decide only when they were certain. Instructions were standardised and presented on a computer screen using the wording of Garety et al. (2005). Draws, and the 'correct' decision, were predetermined, as illustrated in Table 1. In the 85:15 task, the first two beads drawn were of a consistent colour and indicative of the overall correct (mostly orange) jar. At no point during the task did the overall pattern of beads indicate the incorrect (mostly black) jar. In the 60:40 beads task, the first bead indicated the overall correct (mostly blue) jar, but was contradicted by the next bead. As the harder task progressed, the pattern of presented beads mostly favoured the correct jar. However, at one decision point the evidence favoured the incorrect jar (at bead three; two red beads and one blue bead presented), and at three further decision points both jars were equally supported by the evidence (beads two, four and six). The emotional task followed an identical pattern of draws to the hard neutral task, with the 'mostly negative' survey designated as correct. Reasoning errors, therefore, usually involved choosing the 'mostly positive' survey, a tendency which might be influenced by mood.
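To make the evidential structure of the two ratios concrete, the posterior probability of each jar after any sequence of draws can be computed with Bayes' rule under equal priors. The sketch below is purely illustrative (it is not part of the study's materials); the function name and the 0/1 colour coding are ours:

```python
from math import prod

def posterior_correct_jar(draws, p_correct=0.85, p_incorrect=0.15):
    """Posterior probability that the beads come from the 'correct' jar,
    assuming equal priors. draws: 1 = a bead of the correct jar's
    majority colour, 0 = a bead of the other colour."""
    like_c = prod(p_correct if d else 1 - p_correct for d in draws)
    like_i = prod(p_incorrect if d else 1 - p_incorrect for d in draws)
    return like_c / (like_c + like_i)

# Two majority-colour beads are near-conclusive at 85:15 ...
print(round(posterior_correct_jar([1, 1]), 2))            # 0.97
# ... but far weaker evidence at 60:40,
print(round(posterior_correct_jar([1, 1], 0.6, 0.4), 2))  # 0.69
# and one bead of each colour leaves the jars exactly equivocal.
print(posterior_correct_jar([1, 0], 0.6, 0.4))            # 0.5
```

This illustrates why the 60:40 tasks are harder: the same number of draws carries much less information, and alternating colours repeatedly return the evidence to equipoise.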
Using the standard coding of errors (choosing the jar designated 'incorrect', Garety et al., 2005), half of the guesses made in response to equivocal evidence are coded as correct decisions, as are correct guesses made before any beads are drawn. Conversely, evidence-based decisions made when the evidence supports the 'incorrect' jar are counted as errors. However, in the current study reasoning errors were the focus of attention: we wished to avoid arbitrary labelling of a response as an error, and therefore defined reasoning errors as responses that were not supported by the accumulation of presented evidence (i.e. illogical responses). Thus, on the easy task, choosing the mostly black jar at any point was considered to be a reasoning error, coinciding with the standard coding of errors according to experimentally determined jar. On the hard tasks, choosing the mostly red jar or the positive survey at the first, fifth, or seventh and subsequent draws, the mostly blue jar or the negative survey at the third draw, or either after two, four or six draws was coded as a reasoning error. Decisions made before seeing any bead or word (i.e. at draw 0) were rated as a reasoning error. Thus, our error coding differed from the standard method for a total of 42 decisions (two on 85:15; 19 on 60:40 beads; 21 on 60:40 emotional, see Table 1). In practice, error frequencies were similar using the two methods (85:15: standard, n = 36 errors; this study, n = 38 errors; 60:40: standard, n = 74 errors; this study, n = 80 errors; emotional: standard, n = 58 errors; this study, n = 53 errors). Repeated measures ANOVA showed no effect of method of error coding (× 2: new or standard) on error rates by task (× 3: 85:15, 60:40, emotional) or overall (F values all < 1.7, p values all > 0.2).
Errors were additionally coded according to their type: 'inconsistent' errors contradicted the accumulated evidence; 'equivocal' errors were made in the context of evidence equally supporting each jar choice. Salient errors were consistent with the current draw, but not indicated by the accumulated evidence, and could be either inconsistent (e.g. choosing the black jar on draw 4 (black) in the 85:15 task, after already seeing three orange beads), or equivocal (e.g. choosing the red jar on draw 2 (red) of the 60:40 task, having already seen a blue bead). On the emotional task, errors were also coded by whether or not the positive survey was chosen. Making a decision after fewer than three draws was considered to be a hasty decision, indicative of the JTC bias (So et al., 2012).
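The evidence-based error coding described above amounts to a simple rule comparing the chosen jar with the running bead counts. A minimal sketch, with our own naming (the study's actual scoring was applied by hand to the predetermined draw sequences in Table 1):

```python
def classify_response(choice, n_a, n_b):
    """Classify a jar choice ('A' or 'B') against accumulated evidence.
    n_a, n_b: beads seen so far of jar A's and jar B's majority colour.
    Decisions at draw 0, and those made against or on equivocal
    evidence, are coded as reasoning errors."""
    if n_a + n_b == 0:
        return 'error: no data'       # decided before any draw
    if n_a == n_b:
        return 'error: equivocal'     # evidence supports both jars equally
    favoured = 'A' if n_a > n_b else 'B'
    return 'correct' if choice == favoured else 'error: inconsistent'

# e.g. choosing the black jar (B) after three orange beads on the 85:15 task:
print(classify_response('B', 3, 0))   # error: inconsistent
```

A salient error, in this scheme, is one whose choice matches only the most recent draw while the accumulated counts point elsewhere or are tied.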
Responses on each task (three responses per participant) were recorded. Participants were grouped dichotomously according to whether they made a reasoning error on any completed task, and according to whether they made a hasty decision (after fewer than three draws) on any completed task. The Quick Test (Ammons and Ammons, 1962) was used to provide an estimate of current intellectual functioning (IQ). The task was completed only by those whose first language was English, as it is not standardised for respondents answering in a second language.

Procedure
All measures were completed during a baseline assessment; symptom measures always preceded the reasoning task and the IQ measure. Reasoning tasks were always completed in the same order, beginning with the 85:15 task, then the neutral 60:40 task, then the emotional 60:40 task. Full ethical approval was obtained prior to the onset of the study, and all participants gave informed consent (South East National Research Ethics Service Committee ref. 01/1/14).

Analysis
Analyses were completed using SPSS version 20 (IBM, 2011) and STATA 12 (Statacorp, 2011). All participants completed the 85:15 task; 10 participants did not complete one or both of the other two tasks (n = 4 for the 60:40 task; n = 9 for the emotional task). Participants with any missing data were excluded from relevant analyses, and n's reported. The clinical and demographic characteristics of participants completing a probabilistic reasoning task (and therefore included in the current study) were compared to those who did not complete a probabilistic reasoning task, using independent sample t-tests and Chi-square tests. For the participant level analyses of errors, two groups were formed according to the dichotomous rating of whether or not the participant made any reasoning errors (1 = made a reasoning error on at least one task; 0 = no reasoning errors on any completed task). For the response level analyses, decisions were coded dichotomously according to whether or not they were an error (1 = error; 0 = not an error).
For the main analyses, we employed two series of logistic regression analyses to investigate our hypotheses. For the first set of analyses, at participant level, the tendency to make reasoning errors was the dependent variable, and a dichotomised rating of JTC was the predictor variable (1 = JTC on at least one task; 0 = no JTC on any task). Demographic and clinical variables (age, self-reported ethnicity (coded as White/Other), length of illness, gender), PANSS factor scores, IQ, and delusion status were controlled.
The second hypothesis concerned differences in responses between tasks and therefore required three separate repeated measures analyses, each focusing on one of the hypothesised predictors of reasoning errors: IQ, PANSS Positive or PANSS Emotion. The dependent variable in all analyses was whether or not a reasoning error was made. Task type formed a within-subjects variable with three levels, taking the easy neutral task (85:15 beads) as the reference category, and comparing firstly to the 60:40 task (Task 2) then the emotional task (Task 3). The independent variable of primary interest in each analysis was the interaction between the relevant predictor and task type, tested using an interaction term (predictor × task type), controlling for predictor, task type, JTC, and demographic variables. We checked for associations between JTC and predictor variables; none were found (ORs all equal to 1.0, p values > 0.3). Analyses were also repeated excluding JTC and demographic variables, with identical results; therefore only results from the planned, fully controlled analyses are reported. Post-hoc correlational analyses examined the variation in reasoning errors with speed of decision making, and in delusion severity according to error type, controlling for the tendency to make hasty decisions.
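The predictor × task-type interaction term can be pictured as extra design-matrix columns formed by multiplying the predictor with the task-type dummies. The sketch below shows only this dummy coding, under our own variable names; it is not the trial's analysis code:

```python
import numpy as np

def interaction_design(predictor, task):
    """Design-matrix columns for a logistic model with a predictor x
    task-type interaction. task: 0 = 85:15 (reference), 1 = 60:40,
    2 = emotional. Columns: intercept, predictor, two task dummies,
    and the two predictor x task interaction columns."""
    p = np.asarray(predictor, dtype=float)
    t = np.asarray(task)
    t1 = (t == 1).astype(float)   # 60:40 task vs reference
    t2 = (t == 2).astype(float)   # emotional task vs reference
    return np.column_stack([np.ones_like(p), p, t1, t2, p * t1, p * t2])

# One response on the 60:40 task (IQ 100) and one on the 85:15 task (IQ 90):
X = interaction_design([100, 90], [1, 0])
```

With this coding, the coefficients on the interaction columns test whether the predictor's association with errors differs on each hard task relative to the easy reference task.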

Pattern and correlates of reasoning errors at participant level across all tasks
Of all participants, half made at least one reasoning error (n = 106/204, 52%). Of these, 76 (72%) also jumped to conclusions.
Of those not making any reasoning errors (n = 98), 45 (46%) jumped to conclusions. Table 2 shows the demographic and clinical characteristics of those making and not making reasoning errors. Binary logistic regression showed that the tendency to JTC was a significant predictor of the tendency to make reasoning errors, with those showing the JTC bias being more than three times as likely to make an error as those not jumping to conclusions (OR = 3.2, 95% CI 1.6 to 6.1, p = 0.001), irrespective of controlling for PANSS factor scores, demographic variables and IQ. Those making reasoning errors did not differ from those not making reasoning errors on any other demographic or clinical variable, irrespective of controlling for the tendency to JTC (ORs range from 0.6 to 1.0; t-values < 1.8; χ² values < 1.6; p values > 0.05). The pattern was the same irrespective of controlling for current delusion status. Those with a current delusion were almost four times as likely to make an error as those without (OR = 3.8, 95% CI 1.2 to 11.9, p < 0.05) (Table 3).

Pattern and correlates of reasoning errors at response/task level
Table 1 shows the frequencies of each response by draw and task. Just over a quarter of all responses (171/599) were reasoning errors, and almost half of all responses were hasty (255/599), irrespective of whether or not they were correct. Over half of all reasoning errors (96/171, 56%) were inconsistent (made in direct contradiction to the evidence presented; e.g. choosing the mostly black jar, having seen only orange beads). The remainder were made in the context of no data (13/171, 8%), or equivocal data (62/171, 36%). Just over a fifth of reasoning errors (38/171, 22%) were attributable to salience, almost all in the context of equivocal data (n = 35).
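As a rough arithmetic check, the unadjusted odds ratio implied by the participant counts reported above (106 error-makers, 76 of whom jumped to conclusions; 98 non-error-makers, 45 of whom did) can be computed directly; it naturally differs from the covariate-adjusted OR of 3.2 produced by the regression:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for the 2x2 table [[a, b], [c, d]]."""
    return (a / b) / (c / d)

# rows: made errors / no errors; columns: JTC / no JTC
unadjusted = odds_ratio(76, 106 - 76, 45, 98 - 45)
print(round(unadjusted, 2))   # 2.98
```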
We examined the variation in the association of reasoning errors with each potential psychological mechanism between tasks, using repeated measures binary logistic regression, with the 85:15 task as the reference category, compared firstly to the 60:40 task, then to the emotional task. As hypothesised, there were significant interactions between task type and IQ, PANSS Positive and PANSS Emotion, independently of controlling for JTC and demographic variables (Table 4). JTC remained a significant predictor of reasoning errors in each repeated measures analysis (ORs ranged from 2.1 to 2.4, p values ≤ 0.001). Differences were predominantly between the easy and hard neutral tasks. Reasoning errors were more common on the hard neutral task, irrespective of whether they were hasty or not, but not on the emotional task, which did not differ from the easy neutral task. Mean scores showed that reasoning errors on the easy neutral task (which were all inconsistent) were associated with higher levels of PANSS Positive than on the hard neutral task, while reasoning errors on the latter were associated with lower IQ and lower PANSS Emotion scores (Table 5). Frequency and severity of delusions were highest for the hasty errors group on the 85:15 task, and the hasty correct group (those choosing the negative survey) on the emotional task.
Post-hoc correlations, controlling for the tendency to JTC, were used to identify the pattern of associations with error type. On the 85:15 task, hasty inconsistent errors were associated with higher IQ (r = 0.2, p = 0.005, n = 187), positive symptoms (r = 0.2, p = 0.003, n = 201) and delusion severity (r = 0.2, p = 0.01, n = 201). On the 60:40 task, hasty inconsistent errors were associated with low levels of IQ (r = −0.2, p = 0.03, n = 184) and negative affect (r = 0.2, p = 0.02, n = 197), but not delusion severity. On the emotional task, both equivocal and salient positive errors were associated with positive symptoms and delusion severity (r values all −0.2, p values ≤ 0.02, n = 192), irrespective of whether they were hasty or not; salient positive errors were also associated with lower levels of negative affect (r = −0.14, p < 0.05, n = 192). No other error type showed an association with any of the psychological mechanism variables, or with delusion severity (r values < 0.15, p values > 0.05). The rate of reasoning errors clearly decreased as the number of draws considered increased (Table 3).

Discussion
We set out to investigate the pattern and correlates of reasoning errors under differing task conditions, with the aim of testing the association between the jumping to conclusions (JTC) bias (both the tendency to make hasty decisions and for those decisions to be based on limited data) and reasoning errors, and of identifying other processes contributing to reasoning errors. The purpose was to inform the further development of targeted cognitive therapy interventions designed to facilitate helpful belief change for people with delusions.
We found strong associations between the tendency to make reasoning errors and to make hasty decisions: the rate of reasoning errors clearly decreased as the number of draws considered increased. These findings support current models of intervention, and suggest that helping people to gather more information should reduce reasoning errors, and consequently facilitate changes in delusions.
However, reasoning errors were associated with current delusions, irrespective of making hasty decisions, and half of the reasoning errors were not hasty, suggesting additional routes by which reasoning errors may arise, and influence delusions. Two thirds of hasty errors, and around half of the total reasoning errors, were inconsistent, directly contradicting all the available evidence (for example, seeing one or more orange beads and then deciding on the black jar). Although this tendency has been noted previously (Lincoln et al., 2010a), the error rate was much higher in our group. This was not the result of our particular methods of coding (rates were slightly higher using standard coding, see Table 1), but may reflect the clinical presentation of our participants, who had a history of delusions, a recent experience of relapse, and continuing positive symptoms.
The tendency to make inconsistent errors was most evident on the easy neutral task, on which some participants made contradictory decisions despite successive pieces of evidence consistently indicating the other jar. Participants showing this tendency had high levels of delusions, consistent with a maintenance role. They also had high IQ scores, suggesting the behaviour did not result from lack of ability or understanding. Although the tendency was associated with positive symptoms, contrary to hypothesis, salience did not appear to be the mechanism underlying these errors. Rather, our findings suggest other positive symptom related processes, such as suspiciousness of presented evidence, or overconfidence in guesses or 'gut feelings' (Andreou et al., 2014;Köther et al., 2012;Freeman et al., 2012;Moritz and Woodward, 2006). On the hard neutral task, in contrast, lower IQ was associated with reasoning errors, suggesting that poor task understanding, or difficulties with the nature of evidence-based reasoning may exert more influence when the task is harder (less discriminable ratio; more ambiguous evidence). These errors were primarily inconsistent, and were also associated with lower levels of negative affect, but not with delusions.
Error rates were lower on the emotional task, despite the same levels of difficulty as the hard neutral task, raising the possibility that negative affective biases may reduce the likelihood of choosing the positive survey, irrespective of the evidence. Consistent with this, those making hasty but correct decisions on the emotional task showed higher levels of delusion severity; while choosing the positive survey, when presented with 50:50 evidence or a positive word, was associated with lower levels of delusions, and, for salient errors, with less negative affect.
The tendency to make reasoning errors was not associated with the PANSS Excitement factor score, arguing against a role for impulsivity. Salience-based reasoning appeared to account for around a fifth of errors, across all tasks, rather than just on the easy neutral task, but other than the content specific association with delusions, was not associated with any of the predictor variables, or with delusion severity.

Clinical implications
The findings have implications for cognitive behavioural interventions for people with delusions. Around half of the reasoning errors resulted from a tendency to make a decision which directly contradicted the presented material, even in the face of repeated evidence to the contrary. This seemed to be particularly characteristic of people with positive symptoms on the easy task. It was not associated with lower IQ. The findings raise the possibility that this tendency sometimes contributes to delusion maintenance. Simply accruing more evidence may not help these people, and an alternative strategy to data-gathering may be needed. Working directly on beliefs about the nature of evidence and reasoning, and the meaning of a particular piece of evidence, might be useful. A different subset of participants made reasoning errors on the hard task, and had lower IQ scores. General remedial strategies, in addition to addressing data-gathering, may be useful for these people, who may be making reasoning errors simply due to lack of understanding, or poor concentration. This tendency was not associated with delusions. Affective biases may have acted to reduce errors on the emotional task in this study, and could also therefore influence delusional reasoning.

Limitations
The study has a number of limitations. Participants were selected not only for a research study, but also for ability to complete a particular task, and may not, therefore, be representative of all people with similar diagnoses. The measure of IQ is limited, and replication with a more comprehensive assessment of neuropsychological functioning would allow stronger conclusions to be drawn. We carried out multiple comparisons, and while the association of JTC with reasoning errors is well-established, the other findings require replication. Task understanding was not formally assessed, and although apparently 'illogical' decisions were associated with high IQ, it remains possible that participants had simply misunderstood the nature of the task. However, poor task understanding is not always readily distinguishable from a 'psychotic-like' performance, which, by definition, implies a limited influence of current evidence on decision-making. We employed a novel coding of errors, and, while we considered this coding to have more validity (in that it was directly linked to what each participant had seen, rather than experimentally determined), results were in fact almost identical to those that would have been found using standard coding of errors, which suggests that recoding may not be necessary. For the emotional task in particular, standard coding preserves the link with content and may therefore be superior. Associations were cross-sectional, so may not be causal, and the term 'predictor' is used in a statistical sense only. A longitudinal study of changes in delusions in relation to reasoning errors would help in testing causal relationships. Our cross-sectional design did not permit experimental manipulation of task parameters, such as ratio discriminability and ambiguity of the evidence presented, which could also clarify causal relationships.
Nevertheless, the tendency to make contradictory decisions is clear, and has not previously been investigated in the literature.

Conclusions
Reasoning errors are associated with hasty decision making, but only the minority of reasoning errors result from being misled by limited data. A high proportion of reasoning errors directly contradicted the evidence presented, particularly on the easy task, where this tendency was associated with positive symptoms and higher functioning. In contrast, reasoning errors on the hard task were associated with low IQ. Direct interventions focused on the meaning and use of evidence, and not just on gathering more data, may be a helpful addition to cognitive behavioural reasoning interventions.