The promise and pitfalls of cross-partisan conversations for reducing affective polarization: Evidence from randomized experiments

Organizations, activists, and scholars hope that conversations between outpartisans (supporters of opposing political parties) can reduce affective polarization (dislike of outpartisans) and bolster democratic accountability (e.g., support for democratic norms). We argue that such conversations can reduce affective polarization, but that these effects are likely to be conditional on topic, arising especially when conversations avoid areas of disagreement; to usually not persist long-term; and to be circumscribed, not affecting attitudes toward democratic accountability. We support this argument with two unique experiments in which we paired outpartisan strangers to discuss randomly assigned topics over video calls. In Study 1, we found that conversations between outpartisans about their perfect day dramatically decreased affective polarization, although these impacts decayed long-term. Study 2 also included conversations focusing on disagreement (e.g., why each supports their own party), which had no effects. Both studies found little change in attitudes related to democratic accountability.


[Flow diagrams: participant inclusion. Included in analysis: both partners reported the conversation began and the audio is consistent (or the audio is missing and at least one partner reported a conversation), or the audio indicates the conversation began; partners were given general conversation instructions and, if not in the Placebo condition, were informed their partner is an outpartisan. Excluded: both partners said the conversation did not begin (e.g., due to technical problems) and there is no audio of the conversation beginning (or the audio is missing).]
Notes: Points show the estimated differences between the effects of the Inparty Strengths and Outparty Flaws conditions in Study 2. Standard errors (thick lines) and 95% confidence intervals (thin lines) surround the point estimates. p-values are adjusted across all non-primary outcomes using the procedure outlined in (30). As described in the text, p-values for primary outcomes are not adjusted.

Notes: Because the data are repeated observations of the same individuals, the 95% confidence intervals shown in the figure cannot be used to assess the strength of the statistical evidence for differences between time points or between conditions.

Notes: The table shows the composition of the sample at every step across selected covariates. Some covariates (e.g., education) are not available for the screener because these demographics were asked only at the beginning of the conversation survey, not on the screener. †: A joint test for differences between conditions on the demographics in the table was just past the threshold for significance (p = 0.05) due to the difference in gender across conditions discussed in the text. As discussed in the text, men were slightly over-represented in the Perfect Day condition, perhaps indicating that men were especially interested in having the Perfect Day conversation; however, gender is not a significant predictor of our outcomes, and we find similar effect estimates for men and women on the items where we did find effects.
Removing gender, the test is no longer significant (p = 0.24). ‡: Similar to the data from the main conversation survey, we found slight differences between conditions on gender, although a joint test for differences between conditions fell just short of significance (p = 0.05). Removing gender, the test is no longer significant.

B.1 Study 1
After the screener survey, we screened out prospective participants who were not interested in having a conversation or had a suspicious IP address; gave bot-like free responses (as determined by the first author); started the survey more than once (identified using their participant identification number and IP address) or did not finish the survey; failed two or more attention checks and identified as a Democrat, or failed one attention check and identified as a Republican; identified as an Independent or did not provide their party identification; or identified as a Democrat and (during some periods when we had a surplus of Democratic participants) did not have a compatible system. Due to a coding error, for several waves we mistakenly invited some Democratic participants who should not have qualified (e.g., because they failed an attention check).
We kept these participants.
Among qualifying participants, we assigned each to a time window when they said they were available (those who were not available at any of the times were assigned a random time so they still had the option of taking the study). Because of an excess of interested Democratic participants, we randomly dropped Democrats when a large excess of them was available at a given time, prioritizing those who were free at that time and had a compatible system.
We sent participants a message through the recruiting platform indicating the day and time the full study would take place, and a reminder shortly before it began. (Several days usually elapsed between when participants completed the screener survey and their assigned day for the main conversation survey.)

B.2 Study 2
After the screener survey, we screened out those who were not interested in having a conversation or did not say they were available; identified as an Independent; were Democrats who failed an IP validation check and failed the compatibility check; or were Republicans who were not on their phones, had a duplicate IP address or email address, or had an invalid email address. Among qualifying participants, we assigned each to a time window when they were available. We sent participants an email and a text message indicating the day and time the conversation survey would take place, and sent a reminder on the day of the study.

C Relevant previous literature
Given space constraints in the main text, we review previous research relevant to our study in more detail in this section. Our research builds on other research on the impacts of cross-partisan conversations in several important ways. Most notably, our study is the first to study face-to-face (i.e., video) conversations between outpartisans outside the context of a broader intervention; the first to examine the long-run effects of such contact; and the first to examine a broad variety of outcomes relevant to democratic accountability. Table S6 reviews existing studies that focus on conversations between outpartisans only, not combined interventions that include both cross-partisan conversation and other components. Beyond this paper, the only other study to do so is an important working paper by Rossiter, who studies text-based conversations between outpartisan strangers (28). Our study builds on Rossiter's work in several ways: we measure effects on outcomes relevant to democratic accountability, we measure attitudinal polarization, and we measure long-run effects. We also examine the effects of conversations more explicitly about partisanship and, inspired by research on the differences between text- and video-based conversations (52), of conversations that take place over video calls. Table S7 reviews existing studies of interventions that include both conversations with outpartisans and other components. These are valuable studies of particular interventions, but they cannot speak directly to the effects of cross-partisan conversations because they manipulate both cross-partisan contact and other intervention components. A representative example is from Baron and colleagues, who study a day-long depolarization event run by the non-profit Better Angels that involves reflections on intergroup stereotypes, discussions of intergroup bias, a listening exercise, and action-oriented exercises (53).
The pathbreaking study by Baron and colleagues is extremely valuable, although it is not focused on cross-partisan conversation per se. Cross-partisan conversation is only one component of the intervention the authors study; the results speak to the effects of the intervention as a whole but leave unclear the effects of the cross-partisan conversation component alone. None of the studies in Table S7 study effects on outcomes relevant to democratic accountability, except for one outcome in Baron and colleagues' work: whether individuals donate to a depolarization NGO. Table S8 expands our scope to studies of vicarious, imagined, simulated, or reported contact between outpartisans. These studies pertain to the impact of cross-partisan conversations but do not randomly assign contact between real outpartisans. For example, (54) study the effects of imagined contact (Study 1) and text-based conversations with confederates (Studies 2 and 3); although (54) produce valuable insights about their research question (the effect of political inclusion in interpersonal conversations), it is unclear whether their findings would generalize to conversations with an outpartisan who behaves as real people do, rather than as their confederates did. None of these studies examine long-run effects, effects on attitudinal polarization, or effects on democratic accountability.

Finally, Table S9 provides a non-exhaustive list of other studies that examine the effects of various interventions related to either conversations or affective polarization, but that do not concern the effects of cross-partisan conversations on affective polarization.