The objectivity of Subjective Bayesianism

  • Original paper in Philosophy of Probability
  • Published in the European Journal for Philosophy of Science

Abstract

Subjective Bayesianism is a major school of uncertain reasoning and statistical inference. It is often criticized for a lack of objectivity: (i) it opens the door to the influence of values and biases, (ii) evidence judgments can vary substantially between scientists, (iii) it is not suited for informing policy decisions. My paper rebuts these concerns by connecting the debates on scientific objectivity and statistical method. First, I show that the above concerns arise equally for standard frequentist inference with null hypothesis significance tests (NHST). Second, the criticisms are based on specific senses of objectivity with unclear epistemic value. Third, I show that Subjective Bayesianism promotes other, epistemically relevant senses of scientific objectivity—most notably by increasing the transparency of scientific reasoning.


Notes

  1. This means that p(X) = 0 ⇔ q(X) = 0, a property known as absolute continuity of probability measures. Notably, the convergence is uniform, that is, it holds simultaneously for all elements of the probability space. (A schematic statement of the underlying merging result follows these notes.)

  2. Sometimes, meta-analysis is supposed to fill this gap: e.g., the failure to find significant evidence against the null in a series of experiments is counted as evidence for the null. But first, this move does not provide a systematic, principled theory of statistical evidence, and second, it fails to answer the important question of how data can support the null hypothesis in a single experiment. (One Bayesian answer is sketched after these notes.)

  3. Of course, the problem is more general: for both Bayesians and frequentists, the choice of a statistical test demands a good deal of subjective judgment. Often, these choices are nontrivial even in simple problems, e.g., in deciding whether to analyze a contingency table with Fisher’s exact test, Pearson’s χ²-test, or yet another method. (An illustration follows these notes.)

  4. The use of default priors in Bayesian inference raises a number of interesting philosophical questions (e.g., Sprenger 2012) which go beyond the scope of this paper. That said, for the given (Binomial) dataset, the chosen approach looks adequate. (A small sensitivity check follows these notes.)
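
The display below is a schematic rendering, in my own notation, of the Blackwell–Dubins merging-of-opinions result that Note 1 alludes to; it is a sketch for orientation, not a verbatim statement of the theorem.

```latex
% Schematic statement of merging of opinions (cf. Blackwell & Dubins 1962).
% p and q are the two agents' prior probability measures; E_1, E_2, ... is the
% shared evidence stream; the supremum runs over the events X of the common
% probability space.
\[
  \bigl( p(X) = 0 \;\Leftrightarrow\; q(X) = 0 \ \text{for all } X \bigr)
  \;\Longrightarrow\;
  \sup_{X}\,\bigl| p(X \mid E_1,\ldots,E_n) - q(X \mid E_1,\ldots,E_n) \bigr|
  \;\xrightarrow{\;n\to\infty\;}\; 0
  \quad \text{(almost surely).}
\]
```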
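Note 2 observes that frequentist practice lacks a principled account of how data can support the null in a single experiment. The sketch below shows one standard Bayesian way of doing exactly that, via a Bayes factor for a point null in a binomial setting; the prior under the alternative (Beta(1, 1)) and the data (52 successes in 100 trials) are my own illustrative choices, not the paper's.

```python
# Minimal sketch: a Bayes factor quantifying support *for* a point null
# H0: theta = 0.5 against H1: theta ~ Beta(a, b), from a single binomial
# experiment. Prior and data are illustrative choices, not the paper's.
import numpy as np
from scipy.stats import binom
from scipy.special import betaln, gammaln

def bayes_factor_01(k, n, theta0=0.5, a=1.0, b=1.0):
    """BF_01 = m(data | H0) / m(data | H1); values above 1 favour the null."""
    log_m0 = binom.logpmf(k, n, theta0)                 # likelihood at the point null
    log_choose = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    log_m1 = log_choose + betaln(k + a, n - k + b) - betaln(a, b)  # beta-binomial marginal
    return np.exp(log_m0 - log_m1)

print(bayes_factor_01(k=52, n=100))   # roughly 7.4: the data moderately favour H0
```

Unlike a non-significant p-value, the Bayes factor here expresses graded evidence in favour of the null from this one dataset.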
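As an illustration of the judgment call mentioned in Note 3, the sketch below runs Fisher's exact test and Pearson's χ²-test on one and the same 2×2 table; the counts are invented for illustration.

```python
# Minimal sketch: the same 2x2 contingency table analysed with two standard
# frequentist tests. The counts are invented for illustration.
from scipy.stats import fisher_exact, chi2_contingency

table = [[8, 2],
         [1, 5]]

_, p_fisher = fisher_exact(table)              # exact, conditional test
_, p_pearson, _, _ = chi2_contingency(table)   # asymptotic chi-squared test
                                               # (Yates correction on by default)
print(f"Fisher's exact test: p = {p_fisher:.3f}")
print(f"Pearson chi-squared: p = {p_pearson:.3f}")
# With small cell counts the two p-values can differ noticeably, so even this
# simple design requires a choice among defensible analyses.
```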
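Finally, a sketch relating to Note 4: for binomial data, the two most common default priors lead to very similar posteriors once the sample is of moderate size. The numbers are again invented rather than taken from the paper's dataset, and the uniform and Jeffreys priors are my choice of illustration, not necessarily the defaults used in the paper.

```python
# Minimal sketch: posterior for a binomial proportion under two common
# default priors. Numbers are invented, not the paper's dataset.
from scipy.stats import beta

k, n = 52, 100                        # hypothetical successes / trials

for name, (a, b) in {"uniform Beta(1, 1)":      (1.0, 1.0),
                     "Jeffreys Beta(1/2, 1/2)": (0.5, 0.5)}.items():
    post = beta(a + k, b + n - k)     # conjugate Beta posterior
    lo, hi = post.interval(0.95)      # central 95% credible interval
    print(f"{name}: mean = {post.mean():.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```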

References

  • Bem, D.J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100(3), 407–425.

  • Bem, D.J., Utts, J., Johnson, W.O. (2011). Must psychologists change the way they analyze their data? Journal of Personality and Social Psychology, 101(4), 716–719.

  • Benjamin, D.J. et al. (2017). Redefine statistical significance. Nature Human Behaviour, 1, 6–10.

  • Bernardo, J.M. (1979). Reference Posterior Distributions for Bayesian Inference. Journal of the Royal Statistical Society. Series B (Methodological), 41, 113–147.

  • Bernardo, J.M. (2012). Integrated objective Bayesian estimation and hypothesis testing (with discussion) (pp. 1–68). Oxford: Oxford University Press.

  • Blackwell, D., & Dubins, L. (1962). Merging of Opinions with Increasing Information. The Annals of Mathematical Statistics, 33(3), 882–886.

  • Bornstein, R. (1989). Exposure and affect: Overview and meta-analysis of research, 1968–1987. Psychological Bulletin, 106, 265–289.

  • Chase, W., & Brown, F. (2000). General Statistics. New York: Wiley.

  • Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence Erlbaum.

  • Cohen, J. (1994). The Earth is Round (p < .05). American Psychologist, 49, 997–1003.

  • Cox, D., & Mayo, D.G. (2010). Objectivity and Conditionality in Frequentist Inference. In Mayo, D.G., & Spanos, A. (Eds.) Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability and the Objectivity and Rationality of Science, chapter 2 (pp. 276–304). Cambridge: Cambridge University Press.

  • Crupi, V. (2013). Confirmation. In Zalta, E. (Ed.) The Stanford Encyclopedia of Philosophy.

  • Cumming, G. (2014). The New Statistics: Why and How. Psychological Science, 25, 7–29.

  • Douglas, H. (2000). Inductive Risk and Values in Science. Philosophy of Science, 67, 559–579.

  • Douglas, H. (2004). The irreducible complexity of objectivity. Synthese, 138(3), 453–473.

  • Douglas, H. (2009). Science, Policy and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.

  • Earman, J. (1992). Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory. Cambridge, MA: MIT Press.

  • Efron, B. (1986). Why isn’t everyone a Bayesian? (with discussion). American Statistician, 40, 1–11.

  • Fisher, R.A. (1935). The design of experiments. Edinburgh: Oliver & Boyd.

  • Fisher, R.A. (1956). Statistical methods and scientific inference. New York: Hafner.

  • Fitelson, B. (2001). Studies in Bayesian Confirmation Theory. PhD thesis, University of Wisconsin–Madison.

  • Francis, G. (2012). Publication bias and the failure of replication in experimental psychology. Psychonomic Bulletin & Review, 19, 975–991.

  • Gaifman, H., & Snir, M. (1982). Probabilities Over Rich Languages, Testing and Randomness. The Journal of Symbolic Logic, 47(3), 495–548.

  • Galak, J., LeBoeuf, R.A., Nelson, L.D., Simmons, J.P. (2012). Correcting the Past: Failures to Replicate Ψ. Journal of Personality and Social Psychology, 103, 933–948.

  • Gallistel, C.R. (2009). The importance of proving the null. Psychological Review, 116, 439–453.

  • Gelman, A., & Hennig, C. (2018). Beyond objective and subjective in statistics. Journal of the Royal Statistical Society, Series A.

  • Gigerenzer, G. (2004). Mindless Statistics. Journal of Socio-Economics, 33, 587–606.

  • Goodman, S. (1999). Toward Evidence-Based Medical Statistics. 2: The Bayes factor. Annals of Internal Medicine, 130, 1005–1013.

  • Harding, S. (1991). Whose Science? Whose Knowledge? Thinking from Women’s Lives. Ithaca: Cornell University Press.

  • Hempel, C.G. (1965). Science and human values. In Aspects of Scientific Explanation. New York: The Free Press.

  • Howson, C., & Urbach, P. (2006). Scientific Reasoning: The Bayesian Approach, 3rd edn. La Salle: Open Court.

  • Huber, P.J. (2009). Robust Statistics, 2nd edn. New York: Wiley.

  • Hyman, R., & Honorton, C. (1986). A joint communiqué: The psi ganzfeld controversy. Journal of Parapsychology, 50, 351–364.

  • Ioannidis, J.P.A. (2005). Why most published research findings are false. PLoS Medicine, 2, e124.

  • Jaynes, E.T. (1968). Prior Probabilities. IEEE Transactions on Systems Science and Cybernetics, SSC-4, 227–241.

  • Jaynes, E.T. (2003). Probability Theory: The Logic of Science. Cambridge: Cambridge University Press.

  • Jeffreys, H. (1961). Theory of Probability, 3rd edn. Oxford: Oxford University Press.

  • Kass, R.E., & Raftery, A.E. (1995). Bayes Factors. Journal of the American Statistical Association, 90, 773–795.

  • Lacey, H. (1999). Is Science Value Free? Values and Scientific Understanding. London: Routledge.

  • Lee, M.D., & Wagenmakers, E.-J. (2013). Bayesian Cognitive Modeling: A Practical Course. Cambridge: Cambridge University Press.

  • Lehrer, K., & Wagner, C. (1981). Rational Consensus in Science and Society. Dordrecht: Reidel.

  • Levi, I. (1960). Must the Scientist Make Value Judgments? Journal of Philosophy, 57, 345–357.

  • Longino, H. (1990). Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton: Princeton University Press.

  • Machery, E. (2012). Power and Negative Results. Philosophy of Science, 79, 808–820.

  • Mayo, D.G. (1996). Error and the growth of experimental knowledge. Chicago: University of Chicago Press.

  • Mayo, D.G., & Spanos, A. (2006). Severe Testing as a Basic Concept in a Neyman-Pearson Philosophy of Induction. British Journal for the Philosophy of Science, 57, 323–357.

  • McMullin, E. (1982). Values in Science. In Proceedings of the Biennial Meeting of the PSA (pp. 3–28).

  • Megill, A. (Ed.). (1994). Rethinking Objectivity. Durham & London: Duke University Press.

  • Morey, R.D., Rouder, J.N., Verhagen, J., Wagenmakers, E.-J. (2014). Why hypothesis tests are essential for psychological science: a comment on Cumming (2014). Psychological Science, 25(6), 1289–1290.

  • Moyé, L.A. (2008). Bayesians in clinical trials: Asleep at the switch. Statistics in Medicine, 27, 469–482.

  • Neyman, J., & Pearson, E.S. (1967). Joint Statistical Papers. Berkeley, CA: University of California Press.

  • Popper, K.R. (2002). The Logic of Scientific Discovery. London: Routledge. Reprint of the revised English 1959 edition; originally published in German in 1934 as “Logik der Forschung”.

  • Porter, T. (1996). Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton: Princeton University Press.

  • Quine, W.V.O. (1992). Pursuit of Truth. Cambridge, MA: Harvard University Press.

  • Reiss, J., & Sprenger, J. (2014). Scientific Objectivity. In Zalta, E. (Ed.) The Stanford Encyclopedia of Philosophy.

  • Richard, F.D., Bond, C.F.J., Stokes-Zoota, J.J. (2003). One hundred years of social psychology quantitatively described. Review of General Psychology, 7, 331–363.

  • Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638–641.

  • Rouder, J.N., Speckman, P.L., Sun, D., Morey, R.D., Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16(2), 225–237.

  • Rouder, J.N., & Morey, R.D. (2011). A Bayes factor meta-analysis of Bem’s ESP claim. Psychonomic Bulletin & Review, 18, 682–689.

  • Royall, R. (1997). Statistical Evidence: A Likelihood Paradigm. London: Chapman & Hall.

  • Rudner, R. (1953). The scientist qua scientist makes value judgments. Philosophy of Science, 20, 1–6.

  • Senn, S. (2011). You may believe you are a Bayesian but you are probably wrong. Rationality, Markets and Morals, 2, 48–66.

  • Smith, A. (1986). Why isn’t everyone a Bayesian? Comment. American Statistician, 40, 10.

  • Sprenger, J. (2012). The Renegade Subjectivist: José Bernardo’s Reference Bayesianism. Rationality, Markets and Morals, 3, 1–13.

  • Sprenger, J. (2018). Two impossibility results for measures of corroboration. British Journal for the Philosophy of Science, 69, 139–159.

  • Staley, K. (2012). Strategies for securing evidence through model criticism. European Journal for Philosophy of Science, 2, 21–43.

  • Storm, L., Tressoldi, P., Di Risio, L. (2010). Meta-analysis of free response studies 1992–2008: Assessing the noise reduction model in parapsychology. Psychological Bulletin, 136, 471–485.

  • US Food and Drug Administration. (2010). Guidance for the Use of Bayesian Statistics in Medical Device Clinical Trials. Available at https://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm071072.htm.

  • Utts, J. (1991). Replication and Meta-Analysis in Parapsychology (with discussion). Statistical Science, 6, 363–403.

  • Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H.L.J. (2011a). Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi: Comment on Bem (2011). Journal of Personality and Social Psychology, 100(3), 426–432.

  • Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H.L.J. (2011b). Yes, Psychologists Must Change the Way They Analyze Their Data: Clarifications for Bem, Utts and Johnson. Available at https://www.ejwagenmakers.com/papers.html.

  • Wasserman, L. (2004). All of Statistics. New York: Springer.

  • Wetzels, R., Raaijmakers, J.G.W., Jakab, E., Wagenmakers, E.-J. (2009). How to quantify support for and against the null hypothesis: a flexible WinBUGS implementation of a default Bayesian t test. Psychonomic Bulletin & Review, 16, 752–760.

  • Wetzels, R., & Wagenmakers, E.-J. (2012). A default Bayesian hypothesis test for correlations and partial correlations. Psychonomic Bulletin & Review, 19, 1057–1064.

  • Williamson, J. (2010). In Defence of Objective Bayesianism. Oxford: Oxford University Press.

  • Ziliak, S., & McCloskey, D. (2008). The cult of statistical significance: How the standard error costs us jobs, justice and lives. Ann Arbor: University of Michigan Press.

Author information

Corresponding author

Correspondence to Jan Sprenger.

About this article

Cite this article

Sprenger, J. The objectivity of Subjective Bayesianism. Euro Jnl Phil Sci 8, 539–558 (2018). https://doi.org/10.1007/s13194-018-0200-1

