Abstract
This chapter introduces the Evaluators’ story by considering Impact evaluation as a dynamic social process akin to a tug-of-war. For this book, all debate about Impact assessment, including what Impact is, how to measure it and how to capture it, is put to the test within a peer review evaluation panel. By shifting attention to the practice of evaluation, a different understanding emerges of Impact, and of whether peer review is an appropriate tool for Impact and similar evaluation objects. This chapter emphasises that the real value of Impact cannot be divorced from how evaluators play out their evaluation in practice, within a peer review panel group.
Critiquing peer review doesn’t always win friends among academic colleagues!
Personal correspondence sent to the author, July 2016
Notes
1. This term “beyond academia” is taken directly from the REF2014 Impact definition, which is described in more detail in Chap. 3.
2. The Impact criterion has been confirmed for REF2021 and will increase in weight from 20% of the overall evaluation in REF2014 to 25% in REF2021.
3. Throughout the text, where I use the evaluators’ voices, I denote each with a code. The structure of the codes used with the quotations from participants in the results follows a pattern. The first part of the code refers to the panel to which the evaluator belonged. For example, P0 denotes the Main Panel and P1 sub-panel 1 (Clinical Medicine). In many cases an evaluator belonged to more than one panel and, if so, their multi-membership is shown through the first part of the code; that is, P1P2 means that the panellist was a member of both sub-panel 1 and sub-panel 2. The second part of the code represents the criterion that the evaluator assessed, being one of three possibilities: “Out”, where the evaluator only assessed the Outputs criterion; “OutImp”, where the evaluator assessed both the Outputs and Impact criteria; and “Imp”, where the evaluator only assessed Impact. The next part of the code is an individual identification number, and the last part, in brackets, indicates whether the quotation is taken from the pre-evaluation (PRE) or post-evaluation (POST) interviews.
© 2018 The Author(s)

Cite this chapter: Derrick, G. (2018). Impact from the Evaluators’ Eye. In: The Evaluators’ Eye. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-319-63627-6_1

Print ISBN: 978-3-319-63626-9
Online ISBN: 978-3-319-63627-6