Beyond the impact factor?
Seena Fazel1, Jelle Lamsma2

1Department of Psychiatry, University of Oxford, Oxford, UK
2Department of Criminal Law and Criminology, VU University Amsterdam, Amsterdam, The Netherlands

Correspondence to Seena Fazel, seena.fazel{at}psych.ox.ac.uk


Over the past few years, increasing efforts have been made to evaluate research output using different markers of quality.1 This is most clearly reflected in the Research Excellence Framework, a huge undertaking completed in 2014 that ranked subject areas in all UK universities according to the societal impact of research and the environment in which it was conducted, alongside the quality of research publications and other outputs. Even so, 65% of the weighting in this Framework was given to outputs, and the approach used was a variant of peer review in which contributions were read and discussed by panel members.2,3 In 2015, the amount of funding allocated to each university will be made proportionate to its ‘research power’, calculated by multiplying a weighted average score on the aforementioned quality criteria by the total number of full-time equivalent staff members working there.2 Similar considerations also apply to universities and individual researchers in relation to grant funding, appointment and promotion.
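As a rough illustration of how a ‘research power’ figure of this kind might be computed, the short sketch below multiplies a weighted average quality score by staff numbers. The 65% output weighting follows the text; the split of the remaining weight between impact and environment, and all scores and staff numbers, are invented for illustration and are not real Framework data.

```python
# Minimal sketch of a 'research power' calculation: weighted average quality
# score multiplied by full-time equivalent (FTE) staff. The 65% output weight
# follows the article; the 20%/15% split and all scores below are assumptions.

WEIGHTS = {"outputs": 0.65, "impact": 0.20, "environment": 0.15}

def research_power(scores: dict, fte_staff: float) -> float:
    """Weighted average quality score multiplied by FTE staff."""
    weighted_average = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    return weighted_average * fte_staff

# Hypothetical department, scored on a 0-4 quality scale.
example_scores = {"outputs": 3.1, "impact": 3.4, "environment": 3.0}
print(research_power(example_scores, fte_staff=42.5))  # ~133.7
```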

A key determinant in most of these decisions is the impact factor of the journal in which research is published. Journal impact factors have historically been the preserve of one organisation, Thomson Reuters (formerly the Institute for Scientific Information or ISI), which publishes them every June or July in its Journal Citation Reports (JCR) for each journal that meets a basic set of rules, such as timeliness of publication and peer review of submissions.4 The Thomson Reuters Journal Impact Factor (JIF) measures the average frequency with which articles published in a given journal have been cited over the preceding 2 years. It is calculated by dividing the number of current-year citations to items published in a journal during the previous 2 years by the number of citable items published in the same journal over the same period.5 In the past few years, however, other impact factors have been published that complement, and may eventually become as important as, the JIF. Google Scholar, for example, takes a different approach and uses a variant of the h-index. It defines its metric, the h5-index, as ‘the largest number h, such that at least h articles in that publication were cited h times each’ within the past five complete calendar years.6 In addition, the publisher Elsevier has produced three novel journal metrics. The first, the Impact per Publication (IPP), is defined as “the ratio of citations in a year (Y) to scholarly papers published in the three previous years (Y-1, Y-2, Y-3) divided by the number of scholarly papers published in those same years (Y-1, Y-2, Y-3)”.7 The key difference from traditional approaches is that the same papers are used in the numerator and denominator, which is thought to give a fairer indication of the journal's impact and to make the metric less sensitive to manipulation. The second, the Source Normalised Impact per Paper (SNIP), normalises the IPP by research area, which reduces the differences between outperforming journals in fields with relatively low citation rates and those in high-citation subject areas. The third, the SCImago Journal Rank (SJR), is conceptually similar to the IPP, but weights citations according to the rank of the citing journal. For example, a citation from The Lancet counts more towards a journal's impact factor than one from a lower impact factor specialty journal.
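To make the arithmetic behind these definitions concrete, the sketch below computes a two-year JIF-style ratio, a three-year IPP-style ratio and an h5-index from a toy citation record. The data structure and all the numbers are invented for illustration; they are not taken from the JCR, Scopus or Google Scholar.

```python
# Toy illustration of the metric definitions quoted above. Each item records
# its publication year and the citations it received in the census year.
# All numbers are invented.

def two_year_impact_factor(items, census_year):
    """JIF-style ratio: census-year citations to items from the two previous
    years, divided by the number of citable items in those two years."""
    window = {census_year - 1, census_year - 2}
    cited = [it for it in items if it["year"] in window]
    return sum(it["citations_in_census_year"] for it in cited) / len(cited)

def impact_per_publication(items, census_year):
    """IPP-style ratio: same idea over a three-year window (Y-1, Y-2, Y-3),
    with the same papers in numerator and denominator."""
    window = {census_year - 1, census_year - 2, census_year - 3}
    cited = [it for it in items if it["year"] in window]
    return sum(it["citations_in_census_year"] for it in cited) / len(cited)

def h5_index(citation_counts):
    """Largest h such that at least h articles from the past five complete
    years were cited at least h times each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
    return h

items = [
    {"year": 2013, "citations_in_census_year": 12},
    {"year": 2013, "citations_in_census_year": 3},
    {"year": 2012, "citations_in_census_year": 7},
    {"year": 2011, "citations_in_census_year": 5},
]
print(two_year_impact_factor(items, census_year=2014))  # (12 + 3 + 7) / 3 ≈ 7.3
print(impact_per_publication(items, census_year=2014))  # (12 + 3 + 7 + 5) / 4 = 6.75
print(h5_index([25, 14, 9, 6, 6, 3, 1]))                # 5
```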

To investigate possible differences between the JIF and these newer journal metrics, we ranked the top 30 journals in the clinical neurosciences (ie, psychology, psychiatry, neuroscience and general medicine) by their JIF and compared this ranking with one based on a composite score of their JIF, h5-index, IPP, SNIP and SJR rankings (figure 1). The latter may provide a more complete impression of a journal's impact than a ranking based on a single metric. As the figure demonstrates, the journals’ positions vary considerably between the two rankings, except among some of the top-ranking journals. Some of the more clinically oriented journals improve their position under the composite score; this was the case for neurology (eg, Annals of Neurology) and psychiatry journals (eg, JAMA Psychiatry). The only two psychology journals (ie, Trends in Cognitive Sciences and Psychological Bulletin) also ranked higher on the composite score than on the JIF. Of course, changes over time are not factored into this approach, and impact factor trends may also be a relevant consideration. In an interesting study comparing high-impact journals over time, the three most highly cited journals in 1959 were Medicine, American Journal of Medicine and British Medical Bulletin,8 which are no longer considered among the top general medical journals (eg, Medicine is now 630th in the JIF ranking). Nevertheless, in our new composite score, the rankings remain dominated by general medical journals, such as The Lancet, New England Journal of Medicine and JAMA, and by review journals in the neurosciences.

Figure 1

Journal rankings by Journal Impact Factor (JIF) (left) and by a composite score (CS) of their JIF, h5-index, Impact per Publication (IPP), Source Normalised Impact per Paper (SNIP) and SCImago Journal Rank (SJR) rankings (right).
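One simple way to build a composite score of this kind is to rank journals separately on each metric and then average the per-metric ranks. The sketch below does exactly that for three hypothetical journals with invented metric values; it illustrates the idea only, and is not necessarily the exact procedure or data behind figure 1.

```python
# Illustrative composite ranking across several journal metrics.
# Journal names and all metric values are invented; averaging per-metric
# ranks is one plausible way to build a composite score.

metrics = {
    "JIF":  {"Journal A": 45.2, "Journal B": 30.4, "Journal C": 12.1},
    "h5":   {"Journal A": 310,  "Journal B": 340,  "Journal C": 120},
    "IPP":  {"Journal A": 20.1, "Journal B": 24.3, "Journal C": 10.2},
    "SNIP": {"Journal A": 8.9,  "Journal B": 9.4,  "Journal C": 4.1},
    "SJR":  {"Journal A": 14.2, "Journal B": 12.8, "Journal C": 5.5},
}

def ranks(scores):
    """Rank journals 1..n within one metric; higher score = better rank."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {journal: rank for rank, journal in enumerate(ordered, start=1)}

def composite_ranking(metrics):
    """Average each journal's rank across all metrics, then sort by that mean."""
    per_metric = [ranks(scores) for scores in metrics.values()]
    journals = per_metric[0].keys()
    mean_rank = {j: sum(r[j] for r in per_metric) / len(per_metric) for j in journals}
    return sorted(mean_rank.items(), key=lambda kv: kv[1])

for journal, mean_rank in composite_ranking(metrics):
    print(journal, mean_rank)   # Journal B 1.4, Journal A 1.6, Journal C 3.0
```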

In the real world, will this information influence researchers and funders in deciding where to submit a particular paper? Possibly, although a recent study suggested that academics actually weigh two factors when making this decision, which can be modelled mathematically:9 the journal's prestige and impact factor on the one hand, and minimising the likely number of resubmissions or the total time in review on the other. Mean time in review is thus an important factor in this equation, and information that more journals should report every year. Furthermore, the real-world impact of each publication will also be determined by a range of other factors, including downloads and page views, media interest, social media activity (via Altmetrics10) and a particular paper's influence on clinical guidelines and practice. In our view, other important considerations for researchers are the extent to which a particular journal recommends that authors put their reported findings in context through a systematic or structured review of the evidence, adheres to reporting guidelines (eg, CONSORT, PRISMA, STARD, STROBE),11 insists on rigorous and transparent presentation of findings, allows for postpublication comments,12 and its overall readability. A recent survey found that most publications have key elements that are missing, poorly reported or ambiguous.11 For example, fewer than half of all trial reports put their research in the context of a systematic review,13 and 40% of diagnostic studies do not report participants' age and sex.11 In other words, researchers should support journals that aim to increase value and reduce waste14 and consider a range of impacts, including different journal impact factors, when deciding where to publish.
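A very rough way to express the trade-off described in the study cited above is as an expected pay-off per unit of time in review: a higher-impact journal offers a larger pay-off but a lower acceptance probability and a longer expected wait. The acceptance probabilities and review times below are invented purely to show the shape of the calculation; this is not the model from the cited study.

```python
# Rough, illustrative model of the submission trade-off: prestige/impact
# versus expected resubmissions and time in review. All probabilities and
# durations are assumptions, not data from the cited study.

journals = [
    # (name, impact-style pay-off, acceptance probability, months per review round)
    ("High-impact generalist",   40.0, 0.05, 3.0),
    ("Strong specialty journal", 12.0, 0.25, 2.0),
    ("Solid field journal",       4.0, 0.60, 1.5),
]

for name, payoff, p_accept, months_per_round in journals:
    expected_rounds = 1 / p_accept                 # geometric expectation of submission rounds
    expected_months = expected_rounds * months_per_round
    benefit_per_month = payoff / expected_months   # crude 'impact per month in review'
    print(f"{name}: ~{expected_months:.1f} months in review, "
          f"{benefit_per_month:.2f} impact units per month")
```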

References

Footnotes

  • Competing interests None.