
Neophilia ranking of scientific journals

Abstract

The ranking of scientific journals is important because of the signal it sends to scientists about what is considered most vital for scientific progress. Existing ranking systems focus on measuring the influence of a scientific paper (citations)—these rankings do not reward journals for publishing innovative work that builds on new ideas. We propose an alternative ranking based on the proclivity of journals to publish papers that build on new ideas, and we implement this ranking via a text-based analysis of all published biomedical papers dating back to 1946. In addition, we compare our neophilia ranking to citation-based (impact factor) rankings; this comparison shows that the two ranking approaches are distinct. Prior theoretical work suggests an active role for our neophilia index in science policy. Absent an explicit incentive to pursue novel science, scientists underinvest in innovative work because of a coordination problem: for work on a new idea to flourish, many scientists must decide to adopt it in their work. Rankings that are based purely on influence thus do not provide sufficient incentives for publishing innovative work. By contrast, adoption of the neophilia index as part of journal-ranking procedures by funding agencies and university administrators would provide an explicit incentive for journals to publish innovative work and thus help solve the coordination problem by increasing scientists’ incentives to pursue innovative work.


Notes

  1. See e.g. Abbott et al. (2010), Adam (2002), Chapron and Husté (2006), Egghe (2006), Engemann and Wall (2009), Frey and Rost (2010), Garfield (1972), Hirsch (2005), Moed (2008), Palacios-Huerta and Volij (2004, 2014), Tort et al. (2012) and Hutchins et al. (2015).

  2. A formal model of this coordination failure among scientists is provided by Besancenot and Vranceanu (2015). Using a Global Game model (e.g. Carlsson and van Damme 1993; Morris and Shin 2003; Sakovics and Steiner 2012), they show that when scientists’ beliefs about the usefulness of a new idea differ even a little, too little novel science takes place in equilibrium. In related empirical work, Foster et al. (2015) show that while successful novel research yields more citations than successful conventional research, the difference is not enough to compensate for the risk associated with pursuing innovative work.

  3. See, for example, “Meet one of the world’s most groundbreaking scientists. He’s 34.” STAT, 11/6/2015, https://www.statnews.com/2015/11/06/hollywood-inspired-scientist-rewrite-code-life/ (last retrieved 7/1/2016).

  4. An alternative way to measure the vintage of ideas on which a paper is built is by the vintage of the publications that the paper cites. The main disadvantage of this approach is that a citation is an ambiguous reference. Citations are sometimes signposts for a bundle of ideas that have appeared in a literature over a long period of time, rather than pointers to a particular idea in a paper. It is thus problematic to infer that a paper builds on a novel idea simply because it cites recent papers. Moreover, a citation may reflect similarity in the aims of the citing and cited papers rather than reliance on any particular idea. To the extent that this is the case, a journal's high propensity to cite recent articles would merely reflect that it publishes papers in areas with many similar papers, not its authors' love of trying out new ideas. Citation-based indices are thus best viewed as measuring a journal's influence—useful for some purposes—and complementary to the neophilia-based approach we outline in this paper.

  5. In UMLS, terms that are synonyms are mapped to one “concept ID”. There are 2 million concept IDs and 5 million terms; each concept ID thus has 2.5 terms on average, so each UMLS term has approximately 1.5 synonyms. There are 449,783 UMLS terms in cohorts 1961–2013 that are at least once the newest term in a paper published during 1971–2013. (A sketch of this synonym arithmetic follows these notes.)

  6. A 20 % cutoff means the comparison set can be as small as 5 articles; a 1 % cutoff would instead require a comparison set of at least 100 articles. When there are fewer than 5 articles in a comparison group, which occurs only in our sensitivity analyses, we assign top 20 % status to the article at the top of the “age of the newest term” ordering. (A sketch of this flagging rule follows these notes.)

  7. In our baseline specification this time period is 1980–2013. We weight observations so that the total weight of observations is the same for every decade. (A sketch of this reweighting follows these notes.)

  8. In each of these sensitivity analyses, we exclude from the analysis terms from some UMLS categories. However, because some UMLS terms appear in multiple categories, a term that appears in an excluded category is still included in the analysis provided it also appears in at least one of the still-included categories. (A sketch of this rule follows these notes.)

  9. Professional coders with a biomedical degree affix the MeSH terms to each publication in MEDLINE.

  10. Two top-12 journals in the TR impact factor rankings are excluded from our analysis. Cochrane Database of Systematic Reviews is excluded because its focus is on reviews and it thus publishes too few original research articles. Journal of Cachexia, Sarcopenia, and Muscle is excluded because MEDLINE does not have sufficient textual information on this journal. Accordingly, the 10 highly cited journals in Table 1 are among the top 12 most cited journals in the General and Internal Medicine category.

  11. The correlation coefficient between our primary neophilia index (column 1d of Table 2) and the N-gram based index (column 5 of Table 2) is 0.84. The correlation coefficients between our primary neophilia index and the variations reported in columns 3 and 4 of Table 2 range between 0.82 and 0.98.

  12. The estimate of the coefficient of interest is −0.005; a journal 10 places above another journal in the TR impact factor ranking thus publishes innovative articles 5 % more often on average than the lower-ranked journal. A related measure of this link is the correlation between the TR impact factor ranking and our neophilia index; for the journal set considered here this correlation is −0.49. (A sketch of both calculations follows these notes.)
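
A minimal sketch of the synonym arithmetic in note 5. The five-term mapping below is made up (real UMLS data maps roughly 5 million terms onto 2 million concept IDs), and the variable names are our own:

```python
from collections import defaultdict

# Hypothetical term -> concept ID mapping; synonymous terms share an ID.
term_to_concept = {
    "heart attack": "C0027051",
    "myocardial infarction": "C0027051",
    "MI": "C0027051",
    "aspirin": "C0004057",
    "acetylsalicylic acid": "C0004057",
}

# Group terms by concept ID, then count each term's synonyms
# (the other terms that map to the same concept ID).
concept_to_terms = defaultdict(list)
for term, concept in term_to_concept.items():
    concept_to_terms[concept].append(term)

synonym_counts = [
    len(concept_to_terms[concept]) - 1 for concept in term_to_concept.values()
]
print(sum(synonym_counts) / len(synonym_counts))  # 1.6 here; about 1.5 in UMLS
```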
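A sketch of the top 20 % flagging rule in note 6. The function name and the rounding choice (cutoff count rounded down, with a minimum of one flagged article) are our own assumptions, not taken from the paper:

```python
def flag_top_20_percent(newest_term_ages):
    """Flag the top 20 % of a comparison group, ordered by the age of
    each article's newest term (a smaller age means a newer term).
    Groups with fewer than 5 articles flag only the top article."""
    n = len(newest_term_ages)
    order = sorted(range(n), key=lambda i: newest_term_ages[i])  # newest first
    k = max(1, n // 5)  # top 20 %, rounded down; at least one article
    flags = [False] * n
    for i in order[:k]:
        flags[i] = True
    return flags

# A 6-article comparison group: only the article with the newest term is flagged.
print(flag_top_20_percent([2.0, 0.5, 7.0, 3.5, 1.0, 4.0]))
```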
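The decade reweighting in note 7 amounts to weighting each observation by 1/(number of observations in its decade). A sketch with made-up years; the paper's exact normalization may differ:

```python
from collections import Counter

years = [1982, 1985, 1991, 1993, 1995, 1999, 2004, 2011, 2012, 2013]
decades = [year // 10 * 10 for year in years]
counts = Counter(decades)

# Weight each observation by 1 / (observations in its decade), so every
# decade carries the same total weight (here, 1.0 per decade).
weights = [1.0 / counts[decade] for decade in decades]
for decade in sorted(counts):
    print(decade, sum(w for w, d in zip(weights, decades) if d == decade))
```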
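A sketch of the category-exclusion rule in note 8: a term is dropped only when every one of its categories is excluded. The terms below are hypothetical; the category labels are UMLS semantic types:

```python
term_categories = {
    "aspirin": {"Pharmacologic Substance", "Organic Chemical"},
    "stent": {"Medical Device"},
    "BRCA1": {"Gene or Genome"},
}
excluded = {"Organic Chemical", "Medical Device"}

# Keep a term if it has at least one category outside the excluded set.
kept = [term for term, cats in term_categories.items() if cats - excluded]
print(kept)  # ['aspirin', 'BRCA1']: 'aspirin' survives via its other category
```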
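The quantities in notes 11 and 12 are a Pearson correlation and an OLS slope. The sketch below uses made-up journal data (and requires Python 3.10+ for statistics.covariance and statistics.correlation); only the formulas, not the numbers, correspond to the paper:

```python
import statistics

ranks = list(range(1, 11))  # TR impact factor rank; 1 = most cited
neophilia = [0.26, 0.22, 0.25, 0.21, 0.23,
             0.20, 0.22, 0.19, 0.21, 0.18]  # hypothetical index values

# OLS slope of the index on the rank: cov(x, y) / var(x). A slope of
# -0.005 would mean a journal 10 places higher in the ranking publishes
# flagged articles 0.05 more often, as in note 12.
slope = statistics.covariance(ranks, neophilia) / statistics.variance(ranks)
corr = statistics.correlation(ranks, neophilia)  # Pearson, as in note 11
print(f"slope = {slope:.4f}, correlation = {corr:.2f}")
```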

References

  • Abbott, A., Cyranoski, D., Jones, N., Maher, B., Schiermeier, Q., & Van Noorden, R. (2010). Metrics: Do metrics matter? Nature, 465, 860–862.

  • Adam, D. (2002). Citations: The counting house. Nature, 415, 726–729.

  • Alberts, B. (2013). Impact factor distortions. Science, 340, 787.

  • Besancenot, D., & Vranceanu, R. (2015). Fear of novelty: A model of scientific discovery with strategic uncertainty. Economic Inquiry, 53(2), 1132–1139.

  • Bird, S. B. (2008). Journal impact factors, h indices, and citation analyses in toxicology. Journal of Medical Toxicology, 4(4), 261–274.

  • Boudreau, K. J., Guinan, E. C., Lakhani, K. R., & Riedl, C. (2016). Looking across and looking beyond the knowledge frontier: Intellectual distance, novelty, and resource allocation in science. Management Science, 62(10), 2765–2783.

  • Brown, J. D. (2014). Citation searching for tenure and promotion: an overview of issues and tools. Reference Services Review, 42(1), 70–89.

  • Carlsson, H., & van Damme, E. (1993). Global games and equilibrium selection. Econometrica, 61(5), 989–1018.

  • Chapron, G., & Husté, A. (2006). Open, fair, and free journal ranking for researchers. BioScience, 56(7), 558–559.

  • Chen, Y., Perl, Y., Geller, J., & Cimino, J. J. (2007). Analysis of a study of users, uses, and future agenda of the UMLS. Journal of the American Medical Informatics Association, 14(2), 221–231.

  • Egghe, L. (2006). Theory and practice of the g-index. Scientometrics, 69(1), 131–152.

  • Engemann, K. M., & Wall, H. J. (2009). A journal ranking for the ambitious economist. Federal Reserve Bank of St. Louis Review, 91(3), 127–139.

  • Fleming, L. (2001). Recombinant uncertainty in technological search. Management Science, 47(1), 117–132.

  • Foster, J. G., Rzhetsky, A., & Evans, J. A. (2015). Tradition and innovation in scientists’ research strategies. American Sociological Review, 80(5), 875–908.

  • Frey, B. S., & Rost, K. (2010). Do rankings reflect research quality? Journal of Applied Economics, 13(1), 1–38.

  • Garfield, E. (1972). Citation analysis as a tool in journal evaluation. Science, 178, 471–479.

  • Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences, 102, 16569–16572.

  • Hutchins, B. I., Yuan, X., Anderson, J. M., & Santangelo, G. M. (2015). Relative Citation Ratio (RCR): A new metric that uses citation rates to measure influence at the article level. bioRxiv preprint.

  • Katerattanakul, P., Razi, M. A., Han, B. T., & Kam, H.-J. (2005). Consistency and concern on IS journal rankings. Journal of Information Technology Theory and Application (JITTA), 7(2), 1–20.

  • Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.

  • Lee, Y.-N., Walsh, J. P., & Wang, J. (2015). Creativity in scientific teams: Unpacking novelty and impact. Research Policy, 44(3), 684–697.

  • Marshall, A. (1920). Principles of economics (8th ed.). London: Macmillan and Co.

  • Moed, H. F. (2008). UK research assessment exercises: Informed judgments on research quality or quantity? Scientometrics, 74(1), 153–161.

  • Morris, S., & Shin, H. S. (2003). Global games: Theory and applications. In M. Dewatripont, L. Hansen, & S. Turnovsky (Eds.), Advances in economics and econometrics. Cambridge: Cambridge University Press.

  • Osterloh, M., & Frey, B. S. (2015). Ranking games. Evaluation Review, 39(1), 102–129.

  • Packalen, M., & Bhattacharya, J. (2015a). Age and the trying out of new ideas, NBER working paper no. 20920.

  • Packalen, M., & Bhattacharya, J. (2015b). New ideas in invention, NBER working paper no. 20922.

  • Packalen, M., & Bhattacharya, J. (2015c). Cities and ideas, NBER working paper no. 20921.

  • Palacios-Huerta, I., & Volij, O. (2004). The measurement of intellectual influence. Econometrica, 72(3), 963–977.

  • Palacios-Huerta, I., & Volij, O. (2014). Axiomatic measures of intellectual influence. International Journal of Industrial Organization, 34, 85–90.

  • Rzhetsky, A., Foster, J. G., Foster, I. T., & Evans, J. A. (2015). Choosing experiments to accelerate collective discovery. Proceedings of the National Academy of Sciences, 112(47), 14569–14574.

  • Sakovics, J., & Steiner, J. (2012). Who matters in coordination problems? American Economic Review, 102(7), 3439–3461.

  • Tort, A. B., Targino, Z. H., & Amaral, O. B. (2012). Rising publication delays inflate journal impact factors. PLoS ONE, 7(12), e53374.

  • Uzzi, B., Mukherjee, S., Stringer, M., & Jones, B. (2013). Atypical combinations and scientific impact. Science, 342(6157), 468–472.

  • Wang, J., Veugelers, R., & Stephan, P. (2015). Bias against novelty in science: A cautionary tale for users of bibliometric indicators; working paper.

  • Weingart, P. (2005). Impact of bibliometrics upon the science system: Inadvertent consequences? Scientometrics, 62(1), 117–131.

  • Xu, R., Musen, M. A., & Shah, N. (2010). A comprehensive analysis of five million UMLS metathesaurus terms using eighteen million MEDLINE citations. In AMIA annual symposium proceedings, pp. 907–911.


Acknowledgments

We thank Bruce Weinberg, Vetle Torvik, Neil Smalheiser, Partha Bhattacharyya, Walter Schaeffer, Katy Börner, Robert Kaestner, Donna Ginther, Joel Blit and Joseph De Juan for comments. We also thank seminar participants at the University of Illinois at Chicago Institute of Government and Public Affairs, at the Research in Progress Seminar at Stanford Medical School, and at the National Bureau of Economic Research working group on Invention in an Aging Society for helpful feedback. Finally, we thank the National Institute on Aging for funding this research through grant P01-AG039347. We are solely responsible for the content of and any errors in the paper.

Author information

Correspondence to Mikko Packalen.

About this article

Cite this article

Packalen, M., Bhattacharya, J. Neophilia ranking of scientific journals. Scientometrics 110, 43–64 (2017). https://doi.org/10.1007/s11192-016-2157-1
