Abstract
Can’t we all disagree more constructively? Recent years have seen a dramatic increase in political partisanship: the 2013 shutdown of the US government as well as an ever more divided political landscape in Europe illustrate that citizens and representatives of developed nations fundamentally disagree over virtually every significant issue of public policy, from immigration to health care, from the regulation of financial markets to climate change, from drug policies to medical procedures (Koleva et al. Journal of Research in Personality 46:184–194, 2012). The emerging field of political psychology brings the tools of moral psychology to bear on this issue. It suggests that the main conflict shaping politics today can be explained in terms of people’s moral foundations (Graham et al. Journal of Personality and Social Psychology 96(5):1029–1046, 2009; Haidt 2012; Graham et al. PLOS One 7(12):1–13, 2012; cf. also Rai and Fiske Psychological Review 118:57–75, 2011): progressive liberals, it is argued, view society as consisting of separate individuals with differing values and life plans, whereas conservatives rely on a thicker notion of political morality that includes traditions, communities, and values of purity (Haidt and Graham Social Justice Research 20:98–116, 2007). In this paper, I explore the normative implications of this theory. In particular, I will argue that its proponents take it to support an asymmetry of understanding: if deep political disagreements reflect differences in people’s moral foundations, and these disagreements cannot be rationally resolved, then overcoming them makes it necessary to acknowledge the moral foundations of the other side’s political outlook. But conservatives, the theory suggests, already do acknowledge all of the liberal moral foundations, and not vice versa. To overcome partisanship and the resulting political deadlock, then, it seems to be up to liberals to move closer towards the conservative side, and not vice versa.
I wish to analyze what the argument for this asymmetry is and whether it holds up. In the end, I shall argue that the available evidence does support an asymmetry, but that it is the opposite of what Moral Foundations theorists think it is. There is such an asymmetry, but its burden falls on the conservative side.
Notes
In his more recent work, Haidt [13] has added a sixth foundation to the list by breaking up the rights/fairness foundation into a liberty/oppression and a fairness/reciprocity foundation. My argument remains unaffected by this change.
This theory is most strongly associated with the work of Jonathan Haidt, but other authors make closely related claims, see, for example, Rai and Fiske [34].
The distinction between political liberals and conservatives has obviously been tailored to the US context. However, this issue seems to be largely terminological. The theory could easily be applied to other cultural environments, such as Europe, by replacing, with some adjustments, the liberal/conservative distinction with a left-wing/right-wing one.
For the sake of brevity, I gloss over many of the more complex aspects of the theory. For the most comprehensive statement of the account to date, see Graham et al. (2012). In this paper, the authors explain many of the details regarding how moral foundations are psychologically implemented or what their evolutionary backstory is.
It should be obvious that this reconstruction of the argument sacrifices validity for readability. I hope that the main gist of the argument nevertheless becomes clear.
Haidt describes this mechanism (link 4) as follows: “Because people are highly attuned to the emergence of group norms, the model proposes that the mere fact that friends, allies, and acquaintances have made a moral judgment exerts a direct influence on others, even if no reasoned persuasion is used. Such social forces may elicit only outward conformity […], but in many cases people’s privately held judgments are directly shaped by the judgments of others […]” (2001, 819).
This last claim might seem controversial to some, especially those who have a less “rationalist” perspective on how politics does and should work. However, my argument remains unaffected by where exactly one stands with regard to this issue. The only thing one needs to agree with is that rational deliberation – that is, deliberation on the basis of sound empirical knowledge, publicly justifiable principles, and a willingness to change one’s mind in the light of better moral and/or factual arguments on the side of one’s opponents – has at least some legitimate role to play in politics.
It remains true, of course, that there is an is/ought gap. However, since ought implies can, we should be skeptical of moral prescriptions that subjects equipped with a human psychology seem incapable of carrying out.
In the US context, people could be expected to pick up on the subtle racial information implicitly contained in the stereotypically black and white names.
The “good” argument could, for instance, consist in a brief sketch of an evolutionary explanation for the existence of our revulsion towards incest, together with the observation that this evolutionary rationale does not apply in this case. A complementary “bad argument” would point out that love is obviously a good thing, so that each act that could contribute to an increase in the amount of love would therefore have to be ok.
I happen to think that the evidence from dumbfounding is not just the most striking, but in fact the single most important piece of evidence for the anti-rationalist case Social Intuitionism is trying to make. In his landmark 2001 paper, Haidt identifies four main problems for rationalism in addition to the existence of dumbfounding (pp. 819–825): the intuitive basis of moral judgment, bias (the lawyer metaphor), the post hoc nature of moral reasoning, and the emotional impact of moral beliefs. Without dumbfounding, none of these four tenets even comes close to an interesting form of anti-rationalism about moral cognition. See also Haidt and Kesebir 2010, pp. 801–807 for this.
It is worth mentioning that in this original study, an average of 16% of subjects did change their mind about the issue presented to them in response to the challenges put forward by the devil’s advocate.
Here is the full text of this famous vignette: “Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other. What do you think about that, was it OK for them to make love? ([11], 2)”.
It would of course be easy to determine whether a similar position bias or the superior quality made a difference to subjects’ decision by teasing apart position and quality in a further variation of the experimental design. However, this possibility does not matter for my interpretation of the dumbfounding study.
References
Berker, S. 2009. The normative insignificance of neuroscience. Philosophy and Public Affairs 37(4): 293–329.
Foot, Ph. 1967. The problem of abortion and the doctrine of double effect. Oxford Review 5: 5–15.
Graham, J., J. Haidt, and B.A. Nosek. 2009. Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology 96(5): 1029–1046.
Graham, J., B. Nosek, and J. Haidt. 2012. The moral stereotypes of liberals and conservatives: Exaggeration of differences across the political spectrum. PLoS One 7(12): 1–13.
Graham, J., J. Haidt, S. Koleva, M. Motyl, R. Iyer, S. Wojcik, and P.H. Ditto. 2015. Moral foundations theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology.
Greene, J.D., B.D. Sommerville, et al. 2001. An fMRI investigation of emotional engagement in moral judgment. Science 293: 2105–2108.
Greene, J.D. 2008. The secret joke of Kant’s soul. In Moral psychology. Vol. 3. The neuroscience of morality: Emotion, brain disorders, and development, ed. W. Sinnott-Armstrong. Cambridge: MIT Press.
Greene, J.D. 2014. Beyond point-and-shoot morality. Why cognitive (Neuro)science matters for ethics. Ethics 124(4): 695–726.
Hall, L., P. Johansson, and T. Strandberg. 2012. Lifting the veil of morality: Choice blindness and attitude reversals on a self-transforming survey. PLoS One 7(9).
Haidt, J. 2001. The emotional dog and its rational tail. Psychological Review 108: 814–834.
Haidt, J. 2004. The emotional dog gets mistaken for a possum. Review of General Psychology 8(4): 283–290.
Haidt, J. 2012. The righteous mind. Why good people are divided by religion and politics. London: Penguin.
Haidt, J., S. Koller, et al. 1993. Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology 65: 613–628.
Haidt, J., F. Björklund, et al. 2000. Moral dumbfounding: When intuition finds no reason. Unpublished Manuscript, University of Virginia.
Haidt, J., and F. Björklund. 2008. Social intuitionists answer six questions about moral psychology. In Moral psychology. Vol. 2. The cognitive science of morality: intuition and diversity, ed. W. Sinnott-Armstrong, 181–217. Cambridge: MIT Press.
Haidt, J., and M. Hersh. 2001. Sexual morality: the cultures and emotions of conservatives and liberals. Journal of Applied Social Psychology 31: 191–221.
Haidt, J., and J. Graham. 2007. When morality opposes justice: conservatives have moral intuitions that liberals may not recognize. Social Justice Research 20: 98–116.
Haidt, J., and S. Kesebir. 2010. Morality. In Handbook of social psychology, 5th ed, ed. S. Fiske, D. Gilbert, and G. Lindzey, 797–832. Hoboken: Wiley.
Heath, J. 2014. Enlightenment 2.0: Restoring sanity to our politics, our economy, and our lives. HarperCollins.
Henrich, J., S.J. Heine, and A. Norenzayan. 2010. The weirdest people in the world. Behavioral and Brain Sciences 33(2–3): 61–83.
Iyer, R., S.P. Koleva, J. Graham, P.H. Ditto, and J. Haidt. 2012. Understanding libertarian morality: The psychological dispositions of self-identified libertarians. PLoS One.
Jacobson, D. 2012. Moral dumbfounding and moral stupefaction. Oxford Studies in Normative Ethics 2: 289–316.
Kennett, J., and C. Fine. 2009. Will the real moral judgment please stand up? Ethical Theory and Moral Practice 12(1): 77–96.
Knobe, J. 2010. Person as scientist, person as moralist. Behavioral and Brain Sciences 33(4): 315.
Koleva, S.P., J. Graham, P. Ditto, R. Iyer, and J. Haidt. 2012. Tracing the threads: how five moral concerns (especially Purity) help explain culture war attitudes. Journal of Research in Personality 46: 184–194.
Levy, N. 2007. Neuroethics: Challenges for the 21st century. Cambridge: Cambridge University Press.
Musschenga, B. 2013. The promises of moral foundations theory. Journal of Moral Education 42(3): 330–345.
Nisbett, R.E., and T.D. Wilson. 1977. Telling more than we can know: verbal reports on mental processes. Psychological Review 84(3): 231–259.
Nisbett, R.E., and T.D. Wilson. 1978. The accuracy of verbal reports about the effects of stimuli and behavior. Social Psychology 41(2): 118–131.
Paxton, J.M., L. Ungar, et al. 2011. Reflection and reasoning in moral judgment. Cognitive Science 36(1): 163–177.
Prinz, J. 2006. The emotional basis of moral judgment. Philosophical Explorations 9(1): 29–43.
Prinz, J. 2007. The emotional construction of morals. New York: Oxford University Press.
Rai, T.S., and A.P. Fiske. 2011. Moral psychology is relationship regulation: moral motives for unity, hierarchy, equality, and proportionality. Psychological Review 118: 57–75.
Railton, P. 2014. The affective dog and its rational tale: intuition and attunement. Ethics 124(4): 813–859.
Rossen, I., C. Lawrence, P. Dunlop, and S. Lewandowsky. 2014. Can moral foundations theory help to explain partisan differences in climate change beliefs? Paper presented at the ISPP 36th Annual Scientific Meeting, Lauder School of Government, Diplomacy and Strategy, IDC Herzliya, Herzliya, Israel.
Sauer, H. 2012. Educated Intuitions. Automaticity and rationality in moral judgement. Philosophical Explorations 15(3): 255–275.
Thomson, J.J. 1976. Killing, letting die, and the trolley problem. The Monist 59(2): 204–217.
Uhlmann, E.L., and G.L. Cohen. 2005. Constructed criteria: redefining merit to justify discrimination. Psychological Science 16: 474–480.
Uhlmann, E.L., D.A. Pizarro, et al. 2009. The motivated use of moral principles. Judgment and Decision Making 4(6): 476–491.
Acknowledgments
I would like to thank audiences in Groningen, Tilburg, and Rotterdam for helpful feedback on this paper. Special thanks go to Tom Bates, Daan Evers, Joshua Greene, Frank Hindriks, Dominik Klein, Pauline Kleingeld, Bert Musschenga, Jan Sprenger, Bruno Verbeek and two anonymous referees for Neuroethics.
Sauer, H. Can’t We All Disagree More Constructively? Moral Foundations, Moral Reasoning, and Political Disagreement. Neuroethics 8, 153–169 (2015). https://doi.org/10.1007/s12152-015-9235-6