Depolarizing American voters: Democrats and Republicans are equally susceptible to false attitude feedback

American politics is becoming increasingly polarized, which biases decision-making and reduces open-minded debate. In two experiments, we demonstrate that despite this polarization, a simple manipulation can make people express and endorse less polarized views about competing political candidates. In Study 1, we approached 136 participants at the first 2016 presidential debate and on the streets of New York City. Participants completed a survey evaluating Hillary Clinton and Donald Trump on various personality traits; 72% gave responses favoring a single candidate. We then covertly manipulated their surveys so that the majority of their responses became moderate instead. Participants only noticed and corrected a few of these manipulations. When asked to explain their responses, 94% accepted the manipulated responses as their own and rationalized this neutral position accordingly, even though they reported more polarized views moments earlier. In Study 2, we replicated the experiment online with a more politically diverse sample of 498 participants. Both Clinton and Trump supporters showed nearly identical rates of acceptance and rationalization of their manipulated-to-neutral positions. These studies demonstrate how false feedback can powerfully shape the expression of political views. More generally, our findings reveal the potential for open-minded discussion even in a fundamentally divided political climate.


Introduction
The political landscape in the United States is becoming increasingly polarized [1][2][3][4]. Studies have shown that this polarization biases political decisions and reduces informed, critical thinking. For example, people tend to automatically support policy issues proposed by their own party and reject those coming from the opposition [5]. Even during effortful deliberation, people usually side with their own party's stance on various issues [6]. Furthermore, polarization strongly correlates with confirmation bias: polarized individuals are more inclined to seek and interpret information to confirm their present ideas about the world [7].


Method
In the experimental group, while participants rated the candidates, we discreetly looked at their ratings and filled out an identical slip of paper with the majority of their polarized ratings shifted closer to the midpoint (Fig 1A). When the participants finished the survey, we briefly took it and covertly pasted the slip with the manipulated moderate responses on top of the participants' original responses (Fig 1B), then we handed the questionnaire back to them. It now appeared as if the participants had given primarily moderate responses to the questions. This replacement was inconspicuous and took only a few seconds to complete. In the control group, we performed a similar procedure but without manipulating any of the responses. We then asked participants in the control group to explain the reasoning behind approximately three arbitrary non-manipulated responses; in the experimental group, we asked about three manipulated ones. The experimenter would ask, for example, "Why do you think that Trump is more analytic?". If the participants hesitated, or behaved as if something were wrong, the experimenter would inform them that they could change their response (operationalized as correction) and instead explain their reasoning behind that response. We tape-recorded the reasons participants gave for each of these responses. Next, we told the participants we would calculate a summary score of their responses using a transparent overlay that segmented the scales into three categories: a clear preference for Trump, a clear preference for Clinton, or "open-minded" in the middle 30% of the scale (Fig 1C). Together with the participants, we tallied their 12 responses into the three categories. Using this segmentation rule, participants received summary feedback that their score had a majority of either Trump, Clinton, or open-minded responses. We then showed the participants their overall score and asked them, "Most of your responses were in the open-minded (or Clinton, or Trump) category-do you know why this would be?" We tape-recorded as participants explained their overall score. (Two participants did not want their voices recorded and were thus excluded from this measure.) Two independent judges later assessed whether participants justified the manipulated position. In particular, the judges rated whether participants provided clear justifications (e.g., "My parents raised me to be open-minded"), versus whether they either rejected the score (e.g., "I don't think I'm that open-minded") or did not justify it at all (e.g., "I don't know"). We conservatively defined justification as occurring only when both judges agreed that the participant justified the score; the judges agreed on 75% of their ratings.

Fig 1. In the experimental group, while participants rated the candidates, we discreetly looked at their ratings and filled out an identical slip of paper with the majority of their polarized ratings shifted closer to the midpoint (A). When the participants finished the survey, we briefly took it and covertly pasted our paper slip with the manipulated moderate responses on top of the participants' original responses (B). We then asked the participants to explain some of their (manipulated) ratings. Next, we overlaid a transparent sheet that categorized their ratings into: favoring Trump, favoring Clinton, or "open-minded" (i.e., neutral). Together with the participants, we tallied their ratings and asked them to explain their overall score. All participants in the experimental group now had a primarily open-minded score (C). The participants in the control group did not receive any manipulations and instead explained their own original score. (Politician photographs from Wikimedia Commons.)
https://doi.org/10.1371/journal.pone.0226799.g001
Having discussed their aggregate score, we next asked participants to rate the candidates' competency ("How competent are these candidates as leaders?"), to see if the manipulation and confabulation would affect these more general attitudes. Here, each candidate had a visual analog scale ranging from "Extremely incompetent" to "Extremely competent". We then debriefed the participants, asked who they were planning to vote for, and finally asked for consent to use their data.

Results
Correction of the false feedback. In the experimental group, we manipulated an average of 8.53 responses closer to the midpoint of the scale, with 3.55 of these moving from supporting one candidate to being in the open-minded category. We then asked participants to explain approximately 3 (M = 3.1, SD = 0.49) of these manipulated responses, and they only corrected 12% (95% CI [8%, 17%]) of these. Overall, 28% of the participants corrected one manipulation and only 4% corrected two. None corrected more than two of the discussed responses. The participants who made the corrections said that they had either made an error or changed their mind about the rating. No participants expressed any suspicion that their responses had been manipulated, even when asked after the study if they had noticed anything unusual. Accordingly, the participants accepted the large majority of the manipulated responses as their own. After accepting the manipulated responses, participants often gave elaborate arguments for them. For example, one participant marked his response to the experienced item as 94% on the Clinton side of the scale, which we manipulated to a more neutral position closer to the middle of the scale (59%). When asked to explain the latter rating, he said, "I think they're both experienced in their field. Trump is a really successful businessman . . . And then, Hillary has had a lot of years [of] practice in office. So I . . . feel like they both are really experienced." Another participant originally rated diplomatic as 73% on the Clinton side, which we changed to more neutral (57%). She stated, "Hillary has been in the political scene for a very long time, but I think also Trump has a diplomatic aspect to him just because he is very passionate . . . about the country." Participants thus offered arguments for moderate positions even though they had originally reported more polarized opinions just moments earlier.
Manipulation, acceptance, and justification of the aggregate survey score. Our false feedback made it appear as if participants were overall less polarized. In the experimental group, participants originally had an average of 4.32 (95% CI [3.88, 4.75]) neutral responses out of 12; after the manipulation and correction phase, the participants were given feedback that they had 7.87 [7.52, 8.20] (Fig 2A). Looking only at participants who had an overall polarized score (i.e., a majority of responses favoring a single candidate), they had 3.20 [2.79, 3.59] neutral responses before the manipulation and 7.27 [6.70, 7.77] after it. Originally, 25% [15%, 37%] of participants in the experimental group had a majority of neutral responses, whereas the false feedback suggested that almost all of them (97%) did. The control group experienced no manipulation, and 30% [19%, 45%] of them had primarily neutral responses. As expected, in the control group, the large majority of participants (90% [77%, 96%]) verbally justified their own original views, whether neutral or polarized (Fig 2B).
Surprisingly, in the manipulation group, a similar proportion of participants justified their manipulated views, which they had not held moments earlier (94% [84%, 98%]). For example, one participant heavily favored Trump; after the false feedback about open-mindedness, he claimed, "I feel like Clinton and Trump are both in the middle and I don't really stand for either of them." Another participant who initially favored Clinton stated, "I guess I fall somewhere in the middle-I'd like to think I'm a little moderate. ... I think at this point it's important to be open-minded." Others discussed balancing the strengths and weaknesses of both candidates: "In terms of being decisive, Trump is more exact and confident in his decisions, so that could be viewed as being decisive. But then Hillary has a track record in which she's changed her mind about a lot of issues, but that's kind of like her educating herself and having developed thought. So that's two different ways of looking at it."

Competency rating. At the end of the experiment, we asked participants to evaluate both candidates' competence as leaders. The average absolute difference in competency ratings between the candidates was 48.37 [39.39, 56.04] in the control group and 53.45 [47.03, 59.82] in the experimental group. Confirmatory tests showed that these differences did not vary by group (t(120) = 0.95, p = .345), nor did individual ratings for Clinton (Wilcoxon-Mann-Whitney Z = -.400, p = .691) or Trump (Z = .599, p = .550). This indicates that while participants in the experimental group often endorsed and rationalized their seeming open-mindedness, the manipulation did not affect their overall candidate judgments.

Summary of Experiment 1
We found that participants rarely detected when their evaluations of the two presidential candidates had been manipulated into a more "open-minded" position. Instead, they accepted the altered responses as their own and offered unequivocal justifications for them. In the end, this made them endorse a substantially more neutral position compared to their original score. This finding builds upon and supports previous studies exploring false feedback and political attitudes [28][29][30]. However, choice blindness had never been applied to study depolarization of candidate evaluations during an American election.

Experiment 2
One major caveat of Experiment 1 is that, due to the location of the first presidential debate in New York, our sample was heavily skewed towards the Democratic Party. Looking at the overall tally of the responses for all participants, 85% had more responses favoring Clinton and only 11% favored Trump. This was further reflected in the general competency rating: 89% of participants thought Clinton was more competent and planned to vote for her. Typically, we would not be concerned with this limitation, as we have no prior reason to expect that Republican supporters would behave differently from Democrats. Choice blindness studies generally have given few indications that individual differences are key to explaining the effect. However, two factors may make the present situation unique. First, the stakes are considerably higher, as research on political attitudes is often weaponized and wielded in the public debate on polarization. Second, and more important, studies on potential individual differences between liberals and conservatives have become a hotbed of activity, with many contentious results and speculative interpretations. A choice blindness study with participants from the full political spectrum could provide a valuable contribution to this debate. Thus, we decided to run a second experiment with a larger and more representative sample.
In the ongoing search for dissimilarities in personality and cognitive processing between liberals and conservatives, some differences have emerged. In the popular Big Five personality inventory, liberals score higher on openness to experience whereas conservatives score higher on conscientiousness [33][34]. When it comes to universal values, people on the left tend to value universalism and benevolence, whereas people on the right tend to value achievement and tradition [35]. Researchers have also underlined differences in moral reasoning; liberals tend to favor particular foundations (e.g., harm/care, fairness/reciprocity) whereas conservatives put more emphasis on others (e.g., authority/respect) [36][37]. Several studies have also found differences in thinking styles: conservatives have been described as more intuitive and heuristic, whereas liberals have been described as more analytic and systematic (e.g., [38][39]). In line with this, two studies found indications that "bullshit receptivity"-the propensity to believe statements independent of their truth-was higher for conservatives [40][41].
On the other hand, it is unclear how these findings translate to the realm of polarization, as studies of political cognitive processing seem to indicate that conservatives and liberals are similarly sensitive to various biases. For example, Frimer, Skitka and Motyl [42] found that the opposing camps were equally averse to statements that did not support their political position. Even when participants had a chance to earn money by simply reading counter-ideological statements, about two thirds of both liberals and conservatives declined to do so, indicating that there is a considerable mental "cost" involved in exposing oneself to opposing information and arguments. Furthermore, in a meta-analysis of 43 studies investigating various biases, the researchers found almost identical levels of partisan bias and confirmation bias for both liberals and conservatives [43]. Similarly, the propensity to believe fake news has also been found to rely on factors such as analytic thinking and prior exposure, rather than partisanship [44][45].
It remains unclear whether liberals and conservatives would differ on a novel decision measure like choice blindness, which involves a combination of false feedback and potential confabulation not used in any of the studies previously discussed. Susceptibility to false feedback has not been systematically linked to ideology, and political choice blindness studies conducted in Sweden and Argentina have yielded mixed results (see [27][28][29][30] for details). However, the two-party electoral system in the United States, fueled by higher levels of polarization, is an ideal domain in which to explore this question. Thus, in Experiment 2, we aimed to replicate Experiment 1 while testing both liberals and conservatives. To accomplish this, we designed an online version of the first experiment in order to reach a larger and more representative population.

Method
Participants. Experiment 2 took place a few days before the general election held on November 8, 2016. Participants were 498 (60% male) American citizens with an average age of 31.1 years (SD = 10.1). They were recruited through the online survey platform Prolific Academic [46] and asked to participate in a political survey. Participants were randomly assigned to either the experimental condition (n = 405) or the control condition (n = 93). The experiment ran on the software Xperiment version 2 [47]. Participants received $2.50 USD as compensation. The study was approved by the Lund University Ethics Board, D.nr. 2016-1046.
Materials and procedure. Experiment 2 followed the same general design and procedure as Experiment 1. The participants completed a 12-item survey and were given a chance to change their responses. They then received a summary score giving them feedback about their level of open-mindedness. The survey consisted of the same leadership traits as used in Experiment 1 (e.g., analytic, trustworthy). At the start, all items were presented as a randomized list on the same page, with continuous scales ranging between Clinton and Trump (Fig 3). Rather than using a pen and paper as in Experiment 1, the participants used their mouse to draw an 'X' on the scale where it best represented their attitude towards each item. After the participants had answered all 12 items, they received the following cover story and instructions: "Researchers have found that people sometimes are influenced by the order in which the questions are asked. Therefore, we would like you to take a second look at your answers". They were then presented with the items and their responses again, but in a different order, and asked to verify or change their previous responses. They were informed that they could change any response by clicking 'edit' and drawing a new 'X'. The items were presented one at a time, with the other items blurred.
All participants in the experimental condition were given false feedback regarding 5 of their 12 responses. The manipulation mechanism was as follows: select the first five responses at the extremes of the scales (i.e. between 0% and 35% or 65% and 100%), and move them to a random position within the middle 30%. Should a participant have fewer than five responses outside of the middle 30%, the items farthest from the midpoint would be moved closer (by a random amount) towards the midpoint. Thus, all participants received five manipulations shifting their original responses closer to a more open-minded position.
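As a minimal sketch, the manipulation rule described above can be expressed in code. This is our reconstruction from the text, not the paper's actual software (Xperiment); the function name, the random-nudge fractions, and the treatment of scale boundaries are illustrative assumptions.

```python
import random

# Responses are percentages (0-100) along the Trump (0) to Clinton (100) scale.
# The "open-minded" band is the middle 30% of the scale: 35 to 65.
LOW, HIGH = 35, 65

def manipulate(responses, n_manipulations=5):
    """Shift five responses towards a more open-minded position.

    Mirrors the rule described in the text: take the first five responses
    outside the middle 30% and move each to a random position inside it;
    if fewer than five qualify, nudge the items farthest from the midpoint
    closer to it by a random amount (the 0.2-0.8 fraction is our assumption).
    """
    manipulated = list(responses)
    polarized = [i for i, r in enumerate(responses) if r < LOW or r > HIGH]

    # Move the first five polarized responses into the neutral band.
    for i in polarized[:n_manipulations]:
        manipulated[i] = random.uniform(LOW, HIGH)

    # If fewer than five were polarized, nudge the farthest-from-midpoint
    # remaining items part of the way towards 50.
    remaining = n_manipulations - len(polarized)
    if remaining > 0:
        rest = [i for i in range(len(responses)) if i not in polarized]
        rest.sort(key=lambda i: abs(responses[i] - 50), reverse=True)
        for i in rest[:remaining]:
            manipulated[i] += (50 - manipulated[i]) * random.uniform(0.2, 0.8)

    return manipulated
```

Under this rule, every participant receives exactly five manipulations, each strictly reducing the distance between a response and the midpoint.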
As in Experiment 1, participants then received a summary score showing the list of all 12 items as well as their responses and their associated categories (i.e., Trump, open-minded, Clinton). The participants' degree of open-mindedness was also described in text ("judging by your score, you have a [. . .] between Clinton and Trump"). Finally, the participants were debriefed and asked for consent to use their data for research purposes.

Results
Analysis. In Experiment 2, we did not explicitly ask participants who they were going to vote for in the election. Instead, we based their candidate support on their original aggregate survey score and categorized the participants as either Clinton supporters or Trump supporters using a simple majority rule. Participants with a majority of responses favoring Clinton were categorized as Clinton supporters, participants with a majority favoring Trump were Trump supporters, and participants with a majority of "open-minded" responses were categorized as open-minded. Following this rule, the sample consisted of 234 Clinton supporters, 75 Trump supporters, 147 open-minded participants, and 42 ties in which no category had a majority. To further corroborate this classification, we compared how Clinton and Trump supporters answered the favorability question ("How would you compare the two candidates?"), with a scale ranging from Trump (0) to Clinton (100). As expected, the two groups clearly differed in their favorability ratings.

Correction of the false feedback. Participants categorized as open-minded corrected more of the manipulations (95% CI [1.15, 1.77]) than participants favoring a specific candidate (Wilcoxon-Mann-Whitney Z = 3.66, p < .001). However, this is probably best explained by the manipulations appearing less extreme to them, since their original responses were already closer to neutral.
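The categorization and majority rule just described can be sketched as follows. This is our illustrative reconstruction: the middle-30% band and the majority rule come from the text, while the function names and the treatment of values exactly on the band boundaries are assumptions.

```python
from collections import Counter

def categorize(response):
    """Map one response (0 = Trump, 100 = Clinton) to its category.

    The middle 30% of the scale counts as neutral; whether the 35/65
    boundaries themselves count as neutral is our assumption.
    """
    if response < 35:
        return "Trump"
    if response > 65:
        return "Clinton"
    return "open-minded"

def classify_supporter(responses):
    """Label a participant by the most frequent category of their responses.

    Returns "tie" when no single category outnumbers all others,
    matching the 42 ties reported in the text.
    """
    ranked = Counter(categorize(r) for r in responses).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return "tie"
    return ranked[0][0]
```

For example, a participant with seven responses on the Clinton side and five on the Trump side would be classified as a Clinton supporter, while a six-six split would count as a tie.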
Manipulation, acceptance, and justification of the aggregate survey score. Originally, the participants had on average 4.06 [3.80, 4.33] neutral responses; after being exposed to and correcting the manipulations, they had 6.71 [6.37, 7.04] neutral responses (Fig 4A). Importantly, both Clinton and Trump supporters accepted and justified their manipulated, more open-minded scores at similar rates.

Discussion
There is an ongoing quest to create a less polarized and more open-minded political climate in the United States [2][23][24][25]. We believe this to be an important effort for several reasons. Studies show that polarization can bias information processing and decision making in detrimental ways [5][6][48]. As a result, it often leads to fear, anger, and animosity towards the opposition [1][9][10]. Polarization is also associated with dogmatic intolerance, which in turn increases the propensity to behave antisocially and to deny free speech [49]. Furthermore, polarization erodes central parts of civic society, such as trust in the government and media [50]. However, for a depolarization movement to be effective, we need to advance our theories on political attitude change and better understand the mechanisms underlying depolarization.
To contribute to this effort, we tested the choice blindness paradigm [26] with American voters just before the 2016 American general election. Our aim was to investigate whether participants could become less polarized in their political views. Study 1 was conducted during the week of the first presidential debate; Study 2 was conducted online with a larger and more representative sample. Participants responded to a survey comparing Hillary Clinton and Donald Trump on various leadership traits. In both studies, the participants in our sample were clearly polarized when entering the study. Participants who favored either of the candidates had on average only 2 to 3 "open-minded" responses out of 12, defined by a response in the middle 30% of the visual analog scales. Participants then received false feedback about their responses: we nearly doubled the number of items that participants had in the open-minded category. Only a few of these manipulations were detected and corrected, which resulted in an overall score that made it appear as if the participants were more open-minded in their views towards the candidates. When asked to explain their score, the great majority of the participants accepted and justified their apparent open-mindedness, even though they had reported more polarized views moments earlier.

Supporters of Clinton and Trump are similarly susceptible to false feedback
In Experiment 2, both Clinton and Trump supporters behaved similarly on the experimental measures: they had similar correction rates to the choice blindness manipulations and justified their open-minded score to similar degrees. This is the first study we are aware of that demonstrates that liberals and conservatives are equally susceptible to false feedback about their own attitudes. Given previous findings that acceptance and justification of false survey feedback can lead to lasting changes in political attitudes [30], we see the lack of difference between Trump supporters and Clinton supporters as contributing to the ongoing research on the psychology of ideology. So far, this line of research indicates that liberals and conservatives are different in some aspects, such as personality [33], values [35][36], and thinking styles [38][39]. However, they are both similarly susceptible to cognitive biases [42][43]. Our findings show that choice blindness applies equally to conservatives and liberals. More generally, choice blindness offers a useful tool to test how liberals and conservatives reason, or rationalize, when presented with false information.

Choice blindness as a method to study depolarization
The current study was not intended as a practical method to influence voters but rather as a novel investigation of experimental depolarization in the political domain. We find that giving people false feedback can be an effective way to, at least momentarily, make them perceive themselves as more open towards competing candidates. This shows that even deeply held beliefs depend on situational factors and can be flexible under certain circumstances. From a theoretical perspective, we believe that participants interpret their own behavior-in this case their survey responses-and infer the reasons behind these responses [51][52][53][54]. Choice blindness could therefore be useful to study the depolarization of extreme views. For example, we could measure how susceptibility to choice blindness and confabulation are affected by the direction of the manipulation, such as going from polarized to moderate, or vice versa. This could help us understand whether being moderate or undecided is a distinct pole of its own. If so, we could explore whether these moderate views are more or less susceptible to false information. Here, the framing of moderate views may play an important role. In our studies, participants received positive false feedback about their survey responses. Instead of suggesting to people that they are open-minded, we might have found different results if participants had been told that they were "wishy-washy", "flip-flopping", "uncertain", "centrist", or even "moderate". Future work could examine how participants behave when they are given false negative or more neutral feedback as well.
The effectiveness of choice blindness in the political domain distinguishes it from many other forms of persuasion, such as perspective-taking [55][56]. In a recent study, Catapano and colleagues [57] found that such methods are less effective for deep-seated attitudes, such as those relating to politics. In fact, imagining the perspectives of out-group members can even backfire and hinder subsequent attitude change. This could partially be explained by the fact that in those paradigms, participants are fully aware that the perspective they consider is not their own and that the arguments they express are hypothetical. In choice blindness experiments, however, participants often believe that the response they are asked to explain reflects their own true attitude.

Limitations and future studies
In Experiment 1, only 12% of all manipulations were corrected, but in Experiment 2, 41% of them were. The reasons behind this difference are difficult to isolate given the variation in design between the two studies (such as the number of manipulations, the instructions for revisiting their responses, and verbal versus written explanations). One potential explanation is the plausibility of the manipulation. In Experiment 1, the manipulations were performed using a magic trick, which is extremely improbable in the context of a typical political opinion survey. Likely none of the participants had ever filled out a pen-and-paper survey that changed seconds later. Thus, if the participants lack perfect access to their own attitudes (or if political attitudes are not stored for us to access; [58][59]), then the manipulated survey responses ought to function as a prime source of evidence about their own attitudes [51][52]. The (presumably non-conscious) inference may look something like: "I wrote these responses, so either they must be my true attitudes, or else I made several large errors". So, if people see themselves as competent at answering a simple questionnaire, making a series of large errors would seem less plausible. In contrast, in Experiment 2, even though we attempted to replicate the general procedure of the original trick, participants were faced with a far less magical procedure. People are familiar with malfunctioning computer programs and websites, and thus our participants would have had little difficulty concluding that there had simply been a software error when saving their responses that needed correcting.
Another explanation might be the difference between verbally explaining versus silently revising the manipulations. While participants in Experiment 2 were also confronted with the manipulations, they did not have to engage in the mental task of recalling or generating arguments for them. On the face of it, one might expect this additional reasoning process to generate more corrections, presumably by helping participants think more deeply about the issue and discover that they do not agree with the manipulated position. However, if deliberation serves not as attitudinal fact-checking but as a way for participants to further commit to and defend their own ostensible attitudes, the reasoning process might instead lead to fewer corrections [53][54]. A third explanation could simply be that Experiment 2 was conducted closer to the election than Experiment 1, and that a larger proportion of its participants had firmly decided who they would vote for. Finally, the cover story in Experiment 2, which told participants to check their responses in case they had been affected by presentation order, may have primed participants to be more attentive and to search for inconsistencies.
Prior to the current study, choice blindness had only been used to study what might be called "repolarization"-for example by shifting people from agreeing to disagreeing with a statement. Here, for the first time, we show that it is possible to use the same methodology to depolarize people, by making them adopt the idea that they are more "open-minded".
In future studies, we could also explore more global attitude shifts. In the two experiments presented here, the manipulations did not influence the candidates' overall competency and favorability ratings. Had such an effect been found, it would have been a unique case of attitude generalization, in which manipulations of specific character judgments bleed over into a more general trait. Perhaps political competency is judged somewhat independently of the specific traits in our survey.

Conclusion
Our findings corroborate a recent large-scale analysis of survey data with answers from 140,000 people across more than 60 countries [60]. The researchers found that people across the political spectrum were more similar than they were different on several moral and political attitudes. We share their conclusion that similarities between the attitudes of people and groups tend to be overlooked, suggesting that the "us versus them" dichotomy is a prevalent but perhaps exaggerated narrative. We hope our findings can be used to simulate polarizing societal forces and thus contribute to the search for an effective remedy sought by political depolarization movements [2][23][24][25]. Our study reveals that American voters at either end of the political spectrum are willing to endorse more open views about both candidates with surprisingly little intervention. Here, suggesting to people that they are more open-minded removed their political blinders and nudged them to consider and argue for more moderate views. These results offer hope in a divided political climate: even polarized people can become, at least momentarily, open to opposing views.