Qualitative Judgement of Research Impact: Domain Taxonomy as a Fundamental Framework for Judgement of the Quality of Research

Abstract

Metric-based evaluation of research impact has attracted considerable interest in recent times. Although the public at large and administrative bodies are keenly interested in the idea, scientists and other researchers are far more cautious, insisting that metrics are only an auxiliary instrument to qualitative, peer-based judgement. The goal of this article is to propose the domain taxonomy, a well-established construct, as a tool for directly assessing the scope and quality of research. We first show how taxonomies can be used to analyze the scope and perspectives of a set of research projects or papers. We then define the rank of a research team or individual researcher by those nodes in the hierarchy that the researcher's results have created or significantly transformed. An experimental test of the approach in the data analysis domain is described. Although the concept of a taxonomy may seem too simplistic to capture all the richness of a research domain, changes to it and its use can be made transparent and open to discussion.
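
Since the abstract outlines a concrete two-step procedure, a small illustration may help. The Python sketch below encodes a domain taxonomy as a tree and shows (a) a scope profile that counts how many papers of a collection fall under each taxonomy node and (b) a researcher rank read off from the highest-level nodes the researcher's results created or significantly transformed. Everything here is an illustrative assumption layered on the abstract: the names TaxonomyNode, scope_profile, and researcher_rank, and the depth-based rank convention, are ours, not the paper's formalism.

```python
# A minimal sketch, assuming the taxonomy is available as a simple tree.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class TaxonomyNode:
    """One node of a domain taxonomy, e.g. a topic in data analysis."""
    name: str
    parent: Optional["TaxonomyNode"] = None
    children: list = field(default_factory=list)

    def add_child(self, name: str) -> "TaxonomyNode":
        child = TaxonomyNode(name, parent=self)
        self.children.append(child)
        return child

    def depth(self) -> int:
        """Distance from the root: 0 for the root itself."""
        d, cur = 0, self
        while cur.parent is not None:
            d, cur = d + 1, cur.parent
        return d


def scope_profile(papers: dict) -> dict:
    """Count, for every taxonomy node, how many papers fall in its subtree.

    `papers` maps a paper id to the taxonomy nodes it touches; counts are
    propagated up to ancestors, so higher nodes summarize broader scope.
    """
    counts: dict = {}
    for nodes in papers.values():
        touched = set()
        for node in nodes:
            cur: Optional[TaxonomyNode] = node
            while cur is not None:
                touched.add(cur.name)
                cur = cur.parent
        for name in touched:
            counts[name] = counts.get(name, 0) + 1
    return counts


def researcher_rank(transformed: list) -> int:
    """Rank a researcher by the highest-level (shallowest) node among those
    their results created or significantly transformed; smaller is higher."""
    return min(node.depth() for node in transformed)


# Toy taxonomy loosely modelled on the data analysis domain of the paper.
root = TaxonomyNode("Data Analysis")
clustering = root.add_child("Clustering")
kmeans = clustering.add_child("K-Means Variants")
ranking = root.add_child("Ranking and Stratification")

papers = {"paper-1": [kmeans], "paper-2": [kmeans, ranking]}
print(scope_profile(papers))      # e.g. {'Data Analysis': 2, 'Clustering': 2, ...}
print(researcher_rank([kmeans]))  # 2: a leaf-level, narrowly scoped contribution
```

The rank convention here (depth of the shallowest transformed node) is only one plausible reading of the abstract; in the article itself, deciding which nodes count as created or significantly transformed is a matter of peer judgement, which is precisely what keeps the procedure transparent and open to discussion.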

Author information

Corresponding author

Correspondence to Fionn Murtagh.

About this article

Cite this article

Murtagh, F., Orlov, M. & Mirkin, B. Qualitative Judgement of Research Impact: Domain Taxonomy as a Fundamental Framework for Judgement of the Quality of Research. J Classif 35, 5–28 (2018). https://doi.org/10.1007/s00357-018-9247-0
