
So you want to run an experiment, now what? Some simple rules of thumb for optimal experimental design

Experimental Economics

Abstract

Experimental economics represents a strong growth industry. In the past several decades the method has expanded beyond intellectual curiosity, now meriting consideration alongside the other more traditional empirical approaches used in economics. Accompanying this growth is an influx of new experimenters who are in need of straightforward direction to make their designs more powerful. This study provides several simple rules of thumb that researchers can apply to improve the efficiency of their experimental designs. We buttress these points by including empirical examples from the literature.
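
The rules of thumb the article describes concern statistical power: how large each treatment arm must be to detect an effect, and how a fixed subject pool should be split across arms. As a minimal sketch of the kind of calculation involved (the formulas below are the standard normal-approximation ones from the power-analysis literature, not code or notation taken from the article, and the function names are mine), the following uses only the Python standard library to compute a per-arm sample size for a two-sided test and a Neyman-style allocation in which arm sizes are proportional to outcome standard deviations:

```python
# Sketch only: textbook power-analysis formulas, not code from the article.
import math
from statistics import NormalDist  # stdlib, Python >= 3.8

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Per-arm n to detect a mean difference `delta` between two
    equal-variance arms (two-sided test, normal approximation):
        n = 2 * (z_{1-alpha/2} + z_{power})**2 * (sigma / delta)**2
    """
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2
                     * (sigma / delta) ** 2)

def split_total(total_n, sigma_t, sigma_c):
    """Split a fixed total across treatment and control in proportion
    to the outcome standard deviations (n_t / n_c = sigma_t / sigma_c),
    which minimizes the variance of the estimated treatment effect."""
    n_t = round(total_n * sigma_t / (sigma_t + sigma_c))
    return n_t, total_n - n_t

if __name__ == "__main__":
    # Detecting a 0.2-unit effect when the outcome s.d. is 1.0
    # takes roughly 393 subjects per arm at 5% size and 80% power.
    print(n_per_arm(delta=0.2, sigma=1.0))
    # With a noisier treatment arm, allocate more subjects to it:
    print(split_total(300, sigma_t=2.0, sigma_c=1.0))  # (200, 100)
```

Equal allocation is optimal only when the arms have equal outcome variances; when they differ, proportional allocation of this kind is what makes a design more powerful at the same cost.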



Author information

Correspondence to Sally Sadoff.


Cite this article

List, J.A., Sadoff, S. & Wagner, M. So you want to run an experiment, now what? Some simple rules of thumb for optimal experimental design. Exp Econ 14, 439–457 (2011). https://doi.org/10.1007/s10683-011-9275-7

