Abstract
We conducted a series of sender–receiver experiments to study the consequences of implementing a regime of blind proficiency tests in forensic science to reduce error rates and improve the criminal justice system. Senders are our surrogate for forensic laboratories; receivers, for the judge or jury. Our experimental surrogate for blind proficiency tests (random audits with a penalty) reduced sender error rates by as much as 46%, depending on the level of experimentally induced bias. When penalties improved information quality, receiver error rates fell by as much as 26%, depending on the level of sender bias. We also find that the penalty must be large relative to the payoff to induce the reduction in errors. Our results suggest that a regime of blind proficiency testing has the potential to reduce forensic science errors.
Notes
There are over one million felony convictions per year. Risinger (2007) establishes a “minimum factual wrongful conviction rate” of at least 3.3% for capital rape-murders from 1982 through 1989 (pp. 768, 778). The study of Saks and Koehler (2005) suggests that about two thirds of false convictions arise in part from forensic science testing errors or false or misleading forensic science testimony. Multiplying these figures yields more than 20,000 wrongful convictions per year.
They provide several estimates based on different assumptions. We refer only to their “Estimates Extrapolated from This Project.”
These separators were not used in treatments II and III.
We ran 10 sessions for the $10 penalty (Treatment III); however, we rejected the data from the first of our Treatment III sessions. Several participants had taken part in the $5.00 sessions and seem to have given the instructions too little attention to notice the change in penalty size. Informal comments overheard as subjects were being paid seem to confirm this hypothesis. Our qualitative results do not change, however, if this session is included in our dataset.
Our test statistic is a non-parametric chi-squared test of the null hypothesis that the error rates in the base case and the alternative are the same. The test statistic is \( {\chi^2} = \sum\limits_{i = 1}^n {\left( {\frac{{{{(O_i - E_i)}^2}}}{{E_i}}} \right)} \), where the \(O_i\) are the observed counts and the \(E_i\) are the expected counts. We fail to reject the null hypothesis when χ² < c, where c is the critical value for (m−1)×(k−1) degrees of freedom, m being the number of rows and k the number of columns of the contingency table.
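The statistic in this note can be checked with a short sketch (illustrative only, not the authors' code): the contingency table below is made up, and the 5% critical value for one degree of freedom (about 3.841) is a standard tabulated value.

```python
# Pearson chi-squared test on a hypothetical 2x2 contingency table:
# rows = treatment (base case, alternative), columns = (error, no error).
# The counts are invented for illustration.

def chi_squared(observed, expected):
    """Pearson chi-squared statistic: sum of (O_i - E_i)^2 / E_i."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

table = [[30, 70],
         [18, 82]]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

# Expected cell counts under the null of equal error rates:
# row_total * column_total / grand total.
observed = [cell for row in table for cell in row]
expected = [r * c / grand for r in row_totals for c in col_totals]

stat = chi_squared(observed, expected)

# Degrees of freedom: (m - 1) * (k - 1); here (2-1)*(2-1) = 1.
# The 5% critical value for 1 degree of freedom is about 3.841.
reject = stat > 3.841
```

With these invented counts the statistic is about 3.95, just above the critical value, so the null of equal error rates would be rejected at the 5% level.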
We thank Ryan Oprea for drawing our attention to this fact about our results.
In the low-bias condition, an inaccurate report pays $0.50, whereas the payout is $0.25 if no audit is performed; the difference is $0.25 for an inaccurate low-bias report by the sender. In the high-bias condition, an inaccurate report pays $5.25 less than an accurate report if an audit is performed. The chance of an audit is 10%. The expected marginal value of sending an inaccurate report is thus 0.1 × (−$5.25) + 0.9 × ($0.25) = −$0.30. A similar calculation generates the +$0.15 value.
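The note's high-bias arithmetic can be verified with a short sketch (the payoffs and audit probability are taken from the note above; the variable names are ours):

```python
# Check of the expected-value arithmetic for the high-bias condition.
audit_prob = 0.10            # 10% chance of an audit
gain_if_not_audited = 0.25   # inaccurate report pays $0.25 more absent an audit
loss_if_audited = -5.25      # inaccurate report pays $5.25 less if audited

# Expected marginal value of sending an inaccurate high-bias report.
high_bias_ev = audit_prob * loss_if_audited + (1 - audit_prob) * gain_if_not_audited
# high_bias_ev is approximately -0.30: an inaccurate report is a losing bet.
```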
Except for a passing reference to racial prejudice (p. 13), their literature review does not consider internal reward mechanisms such as resentment and envy that may induce people to pay a cost in order to inflict malicious harm on innocent others. Unfortunately, some people will cut off their noses to spite their faces.
We have been told informally that precisely such a code of ethics is being prepared, which is a development we welcome. At the time of this writing, however, it has not yet been implemented by the AAFS.
We thank Norah Rudin for help on these issues, without implicating her in the substance of our remarks.
References
Akerlof, G. A., & Kranton, R. E. (2000). Economics and identity. The Quarterly Journal of Economics, 115(3), 715–753.
Akerlof, G. A., & Kranton, R. E. (2002). Identity and schooling: some lessons for the economics of education. Journal of Economic Literature, 40(4), 1167–1201.
Akerlof, G. A., & Kranton, R. E. (2005). Identity and the economics of organizations. The Journal of Economic Perspectives, 19(1), 9–32.
Akerlof, G. A., & Kranton, R. E. (2008). Identity, supervision, and work groups. The American Economic Review, 98(2), 212–217.
Akerlof, G. A., & Kranton, R. E. (2010). Identity Economics: How our Identities Shape Our Work, Wages, and Well-Being. Princeton and Oxford: Princeton University Press.
Bernstein, D. E. (1996). Junk science in the United States and the Commonwealth. Yale Journal of International Law, 21, 123–182.
Blume, A., DeJong, D., Kim, Y.-G., & Sprinkle, G. (1998). Experimental evidence on the evolution of meaning of messages in sender–receiver games. The American Economic Review, 88(5), 1323–1340.
Boettke, P. J., & Leeson, P. T. (2004). Liberalism, socialism, and robust political economy. Journal of Markets and Morality, 7(1), 99–111.
Browne, M. N., Williamson, C. L., & Barkacs, L. L. (2002). The perspectival nature of expert testimony in the United States, England, Korea & France. Connecticut Journal of International Law, 18, 55–102.
Budowle, B., Bottrell, M. C., Bunch, S. G., Fram, R., Harrison, D., Meagher, S., Oien, C. T., Peterson, P. E., Seiger, D. P., Smith, M. B., Smrz, M. A., Soltis, G. L., & Stacey, R. B. (2009). A perspective on errors, bias, and interpretation in the forensic sciences and direction for continuing advancement. Journal of Forensic Sciences, forthcoming.
Connors, E., Lundregan, T., Miller, N., & McEwen, T. (1996). Convicted by Juries, Exonerated by Science: Case Studies in the Use of DNA Evidence to Establish Innocence After Trial. Washington: National Institute of Justice.
Crawford, V. P., & Sobel, J. (1982). Strategic information transmission. Econometrica, 50(6), 1431–1451.
Crime and Misconduct Commission (CMC). 2002. Forensics Under the Microscope: Challenges in Providing Forensic Science Services in Queensland.
CTV. 2005. “Justice minister quashes Driskell’s conviction,” 4 March 2005. Downloaded from http://www.ctv.ca/CTVNews/CTVNewsAt11/20050304/james_driskell_050303/ on 8 March 2005.
Dror, I. E., & Charlton, D. (2006). Why experts make errors. Journal of Forensic Identification, 56(4), 600–616.
Dror, I. E., & Rosenthal, R. (2008). Meta-analytically quantifying the reliability and biasability of forensic experts. Journal of Forensic Sciences, 53(4), 900–903.
Dror, I. E., Péron, A. E., Hind, S.-L., & Charlton, D. (2005). When emotions get the better of us: the effects of contextual top-down processing on matching fingerprints. Applied Cognitive Psychology, 19, 799–809.
Dror, I., Charlton, D., & Péron, A. E. (2006). Contextual information renders experts vulnerable to making erroneous identifications. Forensic Science International, 156, 74–78.
Garrett, B. L., & Neufeld, P. J. (2009). Invalid forensic science testimony and wrongful convictions. Virginia Law Review, 95(1), 1–97.
Giannelli, P. C. (1997). The abuse of evidence in criminal cases: the need for independent crime laboratories. Virginia Journal of Social Policy & the Law, 4, 439–478.
Giannelli, P. C. (2004). Ake v. Oklahoma: the right to expert assistance in a post-Daubert, post-DNA world. Cornell Law Review, 89(6), 1305–1419.
Green, J. R., & Stokey, N. L. (1980). A two-person game of information transmission. Discussion Paper 418, Center for Managerial Economics and Decision Sciences, Northwestern University.
Greer, S. (1994). Miscarriages of criminal justice reconsidered. The Modern Law Review, 57(1), 58–74.
Griffin, L. (2001). The correction of wrongful convictions: a comparative perspective. American University International Law Review, 16, 1241–1308.
Illinois, State of. 2002. Report of the Governor’s Commission on Capital Punishment, State of Illinois, April 15, 2002.
Innocence Project. 2009. “Facts on Post-Conviction DNA Exonerations,” downloaded 20 July 2009 from http://www.innocenceproject.org/Content/351.php.
Jonakait, R. N. (1991). Forensic science: the need for regulation. Harvard Journal of Law and Technology, 4, 109–191.
Jörg, Nico. 2006. Wrongfull convictions in the Netherlands and in Aruba, manuscript. (At the time the document was written, Jörg was the Advocate-General for Aruba at the Joint Court of Justice for the Dutch Antilles and Aruba, on furlough from his position as Advocate-General at the Supreme Court of the Netherlands, The Hague. The spelling error in the title is in the original.)
Justice Project, The. 2008. Improving the Practice and Use of Forensic Science: A Policy Review.
Kaufman, Fred. 1998. Commission on Proceedings Involving Guy Paul Morin, Queen’s Printer for Ontario, 1998.
Koppl, R. (2005a). How to improve forensic science. European Journal of Law and Economics, 20(3), 255–286.
Koppl, R. (2005b). Epistemic systems. Episteme: Journal of Social Epistemology, 2(2), 91–106.
Koppl, R. (2007). Diversity and forensics: diversity in hiring is not enough. Medicine, Science and the Law, 47(2), 117–124.
Koppl, R. (2010a). The social construction of expertise. Society, 47(3), 220–226.
Koppl, R. (2010b). Romancing forensics: legal failure in forensic science administration. In E. Lopez (Ed.), Government Failure in the Legal System: A Public Choice Review of the Law. Independent Institute.
Koppl, R., & Cowan, E. J. (2010). A battle of forensic experts is not a race to the bottom. Review of Political Economy, forthcoming Spring 2010.
Koppl, R., Kurzban, R., & Kobilinsky, L. (2008). Epistemics for forensics. Episteme: Journal of Social Epistemology, 5(2), 141–159.
Krane, Dan. 2008. “Evaluating Forensic DNA Evidence,” Downloaded 20 July 2009 from http://www.bioforensics.com/downloads/KranePhiladelphia.ppt.
Krane, D., Ford, S., Gilder, J. R., Inman, K., Jamieson, A., Koppl, R., et al. (2008). Sequential unmasking: a means of minimizing observer effects in forensic DNA interpretation. Journal of Forensic Sciences, 53(4), 1006–1007.
Kurzban, R., Tooby, J., & Cosmides, L. (2001). Can race be erased? coalitional computation and social categorization. Proceedings of the National Academy of Science, 98(26), 15387–15392.
Laffont, J.-J., & Martimort, D. (2002). The Theory of Incentives. Princeton, NJ: Princeton University Press.
Levy, D., & Peart, S. (2008a). Sympathetic bias. Statistical Methods in Medical Research, forthcoming.
Levy, D., & Peart, S. (2008b). Inducing greater transparency: towards the establishment of ethical rules for econometrics. Eastern Economic Journal, 34(1), 103–114.
Locke, Mandy and Joseph Neff. 2010. “Witness for the prosecution: Lab loyal to law enforcement,” The News & Observer (of Raleigh, North Carolina), 18 August 2010. Downloaded 20 August 2010 from http://www.newsobserver.com/2010/08/12/625107/witness-for-the-prosecution-lab.html.
Mazar, N., Amir, O., & Ariely, D. (2005). “(Dis)Honesty: A Combination of Internal and External Rewards”, working paper. Massachusetts Institute of Technology: Sloan School of Management.
Mazar, N., & Ariely, D. (2006). Dishonesty in everyday life and its policy implications. Journal of Public Policy & Marketing, 25(1), 1–21.
McQuillan, Peter J. 2004. “Forensic Contretemps.” downloaded from http://www.innocenceproject.org/dnanews/index.php on 7 October 2004.
Milgrom, P., & Roberts, J. (1986). Relying on information of interested parties. The Rand Journal of Economics, 17, 18–32.
Miller, L. S. (1984). Bias among forensic document examiners: a need for procedural change. Journal of Police Science and Administration, 12, 407.
Miller, L. (1987). “Procedural bias in forensic science examinations of human hair”. Law and Human Behavior, 11(2), 157–163.
Mnookin, J. L. (2008). Of black boxes, instruments, and experts: testing the validity of forensic science. Episteme: Journal of Social Epistemology, 5(3), 343–358.
Nagin, D. S., & Pogarsky, G. (2003). An experimental investigation of deterrence: cheating, self-serving bias, and impulsivity. Criminology, 41(1), 501–527.
NAS (National Academy of Sciences). 2009. Strengthening Forensic Science in the United States: A Path Forward, National Academies Press, http://www.nap.edu/catalog.php?record_id=12589.
Peart, S. J., & Levy, D. M. (2005). The “Vanity of the Philosopher”: From Equality to Hierarchy in Post-Classical Economics. Ann Arbor: University of Michigan Press.
Peterson, J. L., Lin, G., Ho, M., Chen, Y., & Gaensslen, R. E. (2003a). The feasibility of external blind DNA proficiency testing. I. Background and findings. Journal of Forensic Sciences, 48(1), 1–11.
Peterson, J. L., Lin, G., Ho, M., Chen, Y., & Gaensslen, R. E. (2003b). The feasibility of external blind DNA proficiency testing. II. Experience with actual blind tests. Journal of Forensic Sciences, 48(1), 32–40.
Possley, Maurice and Ken Armstrong. 1999. “Lab Tech in Botched Case Promoted: Testimony Helped Wrongfully Convict Man of Rape,” Chicago Tribune, 19 March 1999. Downloaded 24 January 2008 from http://www.chicagotribune.com/.
Pyrek, K. M. (2007). Forensic Science Under Siege: The Challenges of Forensic Laboratories and the Medico-Legal Death Investigation System. Amsterdam: Academic Press.
Risinger, M. (2007). Innocents convicted: an empirically justified factual wrongful conviction rate. The Journal of Criminal Law and Criminology, 97(3), 761–806.
Risinger, M., Saks, M. J., Thompson, W. C., & Rosenthal, R. (2002). The Daubert/Kumho implications of observer effects in forensic science: hidden problems of expectation and suggestion. California Law Review, 90, 1–56.
Robotham, Julie and Geesche Jacobsen. 2009. “DNA lab error led to false conviction,” The Sydney Morning Herald, 3 October 2009. Downloaded 3 October 2009 from http://www.smh.com.au/national/dna-lab-error-led-to-false-conviction-20091002-ggj6.html.
Saks, M., & Koehler, J. J. (2005). The coming paradigm shift in forensic identification science. Science, 309, 892–895.
Saks, M. J., et al. (2001). Model prevention and remedy of erroneous convictions act. Arizona State Law Journal, 33, 665–718.
Schweitzer, M. E., & Hsee, C. K. (2002). Stretching the truth: elastic justification and motivated communication of uncertain information. Journal of Risk and Uncertainty, 25(2), 185–201.
Smith, A. (1759 [1976]). The Theory of Moral Sentiments, Indianapolis, Indiana: Liberty Classics.
Smith, V. L. (2003). Constructivist and ecological rationality in economics. The American Economic Review, 93, 465–508.
Spence, M. (1973). Job market signaling. The Quarterly Journal of Economics, 87(3), 355–374.
Squires, Nick. 2005. “Outback murder trial defence attacks ‘flawed’ DNA evidence,” Telegraph News, 7 December 2005. Downloaded from http://telegraph.co.uk 7 December 2005.
Thompson, W. C. (2009). Painting the target around the matching profile: the Texas sharpshooter fallacy in forensic DNA interpretation. Law Probability and Risk, 8(3), 257–276.
Thompson, W. C., & Cole, S. A. (2007). Psychological aspects of forensic identification evidence. In M. Costanzo, D. Krauss, & K. Pezdek (Eds.), Expert psychological testimony for the courts (pp. 31–68). Mahwah, NJ: Erlbaum.
Thomson, M. A. (1974). Bias and quality control in forensic science: A cause for concern. Journal of Forensic Sciences, 19, 504–517.
Unattributed. 2003. “Delitto in pineta: Il killer ha lo stesso sangue dell’inglese?” [Murder in the pine forest: Does the killer have the same blood as the Englishman?], Il Resto del Carlino 9 March 2003. Downloaded from http://www.ilrestodelcarlino.it 31 January 2005.
Unattributed. 2006. “Review of pathologist’s work could add to ranks of wrongly accused in 2006,” The Brandon Sun 1 January 2006. Downloaded 3 January 2006.
Unattributed. 2008. “Dr. Charles Smith: The man behind the public inquiry,” CBC News, 1 October 2008. Downloaded 7 October 2009 from http://www.cbc.ca/news/background/crime/smith-charles.html.
Whitman, G., & Koppl, R. (2009). Rational bias in forensic science. Law Probability and Risk, 9(1), 69–90.
Young, R., & Sanders, A. (1994). The royal commission on criminal justice: A confidence trick? Oxford Journal of Legal Studies, 14(3), 435–448.
Zabell, S. L. (2005). Fingerprint evidence. Journal of Law and Policy, 13, 143–179.
We thank the Earhart Foundation for generous financial support. We thank the NSF for an exploratory grant (#0622477) that established the viability of this line of experiments. For helpful comments and discussion, we thank Robert Kurzban, Norah Rudin, Benjamin Powell, Ryan Oprea, David Levy, Maria Minniti, John Schiemann, and an anonymous referee. We thank Shavonne Bailey and Diana Davino for excellent and dedicated research assistance. We thank Campus Provost Dr. Kenneth Greene and FLESS Director Professor John Schiemann for their help and energy in creating the Florham Laboratory for Experimental Social Science (FLESS), where we conducted our experimental sessions.
Appendices
Appendices 1 and 2 are available at http://cms.fdu.edu/files/raefless.pdf.
Appendix 3: Data Tables
Cowan, E.J., Koppl, R. An experimental study of blind proficiency tests in forensic science. Rev Austrian Econ 24, 251–271 (2011). https://doi.org/10.1007/s11138-010-0130-4