
Registration and Replication: A Comment

Published online by Cambridge University Press: 04 January 2017

Richard G. Anderson*
Affiliation:
Research Division, Federal Reserve Bank of St. Louis, St. Louis, Missouri, and Management School, University of Sheffield, Sheffield, UK e-mail: randerson@stls.frb.org

Extract

Social scientists have long debated how closely their research methods resemble the pure, classical ideal: sequentially (and absent pejorative “data snooping”) formulate a theory; develop empirically falsifiable hypotheses; collect data; and conduct the appropriate statistical tests. Meanwhile, in private quarters, they acknowledge that true research seldom proceeds in this fashion. Rather, in practice, the world is observed, data are collected, hypotheses formed, tests conducted, more data collected, and hypotheses revised (e.g., the classic Bernal 1974). Results are collected by the field's scientists into a body of knowledge that defines “known science” and sets the accepted boundaries for future research. Kuhn (1970) labeled this a paradigm. Occasionally, he argued, results appear that lie outside the bounds of the extant paradigm—then, innovation occurs.

The articles included in this issue's symposium discuss “registration” of empirical studies. The purpose is to reduce “publication bias,” that is, to prevent the scientific equivalent of schoolboy cheating: reporting tests of hypotheses that became evident only after the data were in hand. Registration has little power against the type of research fraud that is discovered from time to time. There are costs, however, when scientists operating within an accepted paradigm discourage researchers from exploring and reporting any and all relationships and correlations in a data set.

Type
Symposium on Research Registration
Copyright
Copyright © The Author 2013. Published by Oxford University Press on behalf of the Society for Political Methodology 


References

Anderson, Richard G. 2006. Replicability, real-time data, and the science of economic research: FRED, ALFRED, and VDC. Federal Reserve Bank of St. Louis Review 88(1): 81–94.
Anderson, Richard G., and Dewald, William G. 1994. Replication and scientific standards in applied economics a decade after the Journal of Money, Credit, and Banking project. Federal Reserve Bank of St. Louis Review 76(6): 79–83.
Anderson, Richard G., Greene, William H., McCullough, B. D., and Vinod, H. D. 2008. The role of data and program code archives in the future of economic research. Journal of Economic Methodology 15(1): 99–119.
Bernal, J. D. 1974. Science in history, 3rd ed. (1st ed., 1954). Cambridge, MA: MIT Press.
Dewald, William G., Thursby, Jerry G., and Anderson, Richard G. 1986. Replication in empirical economics: The Journal of Money, Credit, and Banking project. American Economic Review 76(4): 587–603.
King, Gary. 1995. Replication, replication. PS: Political Science and Politics 28(3): 444–52.
King, Gary. 2006. Publication, publication. PS: Political Science and Politics 39(1): 119–25.
Kuhn, Thomas S. 1970. The structure of scientific revolutions, 2nd ed. (1st ed., 1962). Chicago: University of Chicago Press.