
Impact from the Evaluators’ Eye

Chapter in The Evaluators’ Eye

Abstract

This chapter introduces the Evaluators’ story by considering Impact evaluation as a dynamic social process akin to a tug-of-war. For this book, all debate about Impact assessment, including what it is, how to measure it and how to capture it, is put to the test within a peer review evaluation panel. Shifting the focus to the practice of evaluation yields a very different view of how to understand Impact, and of whether peer review is an appropriate tool for Impact and similar evaluation objects. The chapter emphasises that the real value of Impact cannot be divorced from how evaluators play out their evaluation in practice, within a peer review panel group.

Critiquing peer review doesn’t always win friends among academic colleagues!

Personal correspondence sent to the author, July 2016


Notes

  1. This term “beyond academia” is taken directly from the REF2014 Impact definition, which is described in more detail in Chap. 3.

  2. The Impact criterion has been confirmed for REF2021 and will increase in weight from 20% of the overall evaluation in REF2014 to 25% in REF2021.

  3. Throughout the text, where I use the evaluators’ voices, I denote each with a code. The codes attached to quotations from participants follow a consistent pattern. The first part of the code refers to the panel(s) to which the evaluator belonged: P0 denotes the Main Panel and P1 Sub-panel 1 (Clinical Medicine), for example. In many cases an evaluator belonged to more than one panel, and this multi-membership is shown in the first part of the code, so that P1P2 means the panellist was a member of both Sub-panel 1 and Sub-panel 2. The second part of the code represents the criterion the evaluator assessed, one of three possibilities: “Out” where the evaluator assessed only the Outputs criterion; “OutImp” where the evaluator assessed both the Outputs and Impact criteria; and “Imp” where the evaluator assessed only Impact. The next part of the code is an individual identification number, and the last part, in brackets, indicates whether the quotation is taken from the pre-evaluation (PRE) or post-evaluation (POST) interviews. A minimal sketch of how this convention can be parsed appears below.
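The coding convention in Note 3 amounts to a small, machine-readable format. As a minimal illustrative sketch (not from the book), the Python snippet below splits a participant code into its four parts; the exact concatenated form, e.g. P1P2OutImp07(PRE), is an assumption, since the note does not specify how the parts are joined.

    import re

    # Minimal sketch, not from the book: parse participant codes of the form
    # described in Note 3. The concatenated layout "P1P2OutImp07(PRE)" is an
    # assumed rendering; the note does not specify separators.
    CODE_PATTERN = re.compile(
        r"^(?P<panels>(?:P\d+)+)"         # panel membership(s), e.g. P0 or P1P2
        r"(?P<criterion>OutImp|Out|Imp)"  # criterion assessed ("OutImp" tried first)
        r"(?P<ident>\d+)"                 # individual identification number
        r"\((?P<phase>PRE|POST)\)$"       # pre- or post-evaluation interview
    )

    def parse_code(code: str) -> dict:
        """Split a participant code into panels, criterion, id and interview phase."""
        match = CODE_PATTERN.match(code)
        if match is None:
            raise ValueError(f"Unrecognised participant code: {code!r}")
        parts = match.groupdict()
        parts["panels"] = re.findall(r"P\d+", parts["panels"])  # e.g. ["P1", "P2"]
        return parts

    print(parse_code("P1P2OutImp07(PRE)"))
    # -> {'panels': ['P1', 'P2'], 'criterion': 'OutImp', 'ident': '07', 'phase': 'PRE'}

One detail worth noting in this sketch: “OutImp” is listed before “Out” in the alternation because regular-expression alternation takes the first match, so “Out” alone would otherwise swallow the prefix of “OutImp”.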



Copyright information

© 2018 The Author(s)

About this chapter


Cite this chapter

Derrick, G. (2018). Impact from the Evaluators’ Eye. In: The Evaluators’ Eye. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-319-63627-6_1


  • DOI: https://doi.org/10.1007/978-3-319-63627-6_1

  • Publisher Name: Palgrave Macmillan, Cham

  • Print ISBN: 978-3-319-63626-9

  • Online ISBN: 978-3-319-63627-6

  • eBook Packages: Education, Education (R0)
