National research assessment exercises: the effects of changing the rules of the game during the game

Abstract

National research evaluation exercises provide a comparative measure of the research performance of a nation’s institutions, and as such serve as a tool for stimulating research productivity, particularly when the results inform selective government funding. While one school of thought welcomes frequent changes in evaluation criteria to prevent those evaluated from adopting opportunistic behaviors, the “rules of the game” should above all serve policy objectives, and should therefore be known sufficiently in advance of the evaluation period. Otherwise, policy-makers risk facing a dilemma: should they reward the universities that responded best to the criteria in effect at the outset of the observation period, or those that rank best under rules that emerged during or after it? This study examines whether, and to what extent, some universities are penalized rather than rewarded for behaving well under the “known” rules of the game, by comparing the research performance of Italian universities over the period of the nation’s next evaluation exercise (2004–2008): first as measured by the criteria available at the outset of the period, and then by those announced at its end.
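
The comparison underlying the study can be pictured as ranking the same universities under the two rule sets and observing how positions shift. The sketch below is only a toy stand-in for the paper's actual bibliometric analysis: the universities, scores, and the helper function ranks are all invented for illustration.

```python
# Hypothetical illustration of the core comparison: the same universities
# scored under the criteria known at the outset of 2004-2008 versus those
# announced at the end of the period. All names and scores are invented.

scores_outset = {"Univ A": 0.82, "Univ B": 0.75, "Univ C": 0.61}
scores_final = {"Univ A": 0.64, "Univ B": 0.79, "Univ C": 0.70}

def ranks(scores):
    """Map each university to its rank (1 = best) under a score dict."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {u: i + 1 for i, u in enumerate(ordered)}

before, after = ranks(scores_outset), ranks(scores_final)
for u in scores_outset:
    # A positive shift means the university gains ground when the rules change.
    print(f"{u}: rank {before[u]} -> {after[u]} (shift {before[u] - after[u]:+d})")
```

A university like "Univ A" in this toy example, best under the criteria known in advance but overtaken once the rules change, is exactly the kind of case the study sets out to identify.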


Notes

  1. www.orp.researchvalue.it (last accessed on 2 Feb 2011).

  2. Civil engineering and architecture were not considered because the WoS listings are not sufficiently representative of research output in this area.

  3. A complete list is available at http://science.thomsonreuters.com/cgi-bin/jrnlst/jlsubcatg.cgi?PC=D. Last accessed on 2 Feb 2011.

  4. For publications in multi-category journals, AIR is calculated as the weighted average of the values for each subject category, with weights equal to the average citation intensity of each category (see the sketch after these notes).

  5. We applied a full counting method: each publication is fully counted for each participating university or UDA.

  6. In this simulation we assume that a scientist with no publications indexed in WoS has no other research outputs to submit.
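
To make note 4 concrete, here is a minimal sketch of the weighted average it describes. The function name, input structure, and numbers are assumptions made for illustration; they do not reproduce the ORP system's actual implementation.

```python
# Minimal sketch of the weighted AIR described in note 4: for a journal
# listed in several WoS subject categories, the journal's AIR is the
# average of the per-category values, weighted by each category's
# average citation intensity. Inputs and names are hypothetical.

def weighted_air(category_values):
    """category_values: list of (air_in_category, avg_citation_intensity)
    pairs, one per subject category the journal belongs to."""
    total_weight = sum(w for _, w in category_values)
    return sum(air * w for air, w in category_values) / total_weight

# A journal in two categories: AIR 1.3 where average citation intensity
# is 4.0, and AIR 0.9 where it is 2.0.
print(weighted_air([(1.3, 4.0), (0.9, 2.0)]))  # (1.3*4 + 0.9*2) / 6 ≈ 1.167
```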


Author information


Correspondence to Giovanni Abramo.

Cite this article

Abramo, G., D’Angelo, C.A. & Di Costa, F. National research assessment exercises: the effects of changing the rules of the game during the game. Scientometrics 88, 229–238 (2011). https://doi.org/10.1007/s11192-011-0373-2
