The definition of the impact factor

In the early 1960s, Irving H. Sher and Eugene Garfield created the journal impact factor (IF) to help select journals for the new Science Citation Index (SCI). They quickly recognised that small journals specialising in certain topics might not be selected if selection depended solely on total publication or citation counts. A simple measure was therefore needed that could be applied to journals regardless of size or citation frequency: the impact factor [1].

A journal’s IF is based on two elements: the numerator, which is the number of citations in the current year to any items published in a journal in the previous 2 years, and the denominator, which is the number of substantive articles (source items) published in the same 2 years. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high IF.
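Expressed as a formula (a standard formulation of the definition above; the symbols are chosen here for illustration):

\[
\mathrm{IF}_{y} = \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}}
\]

where \(C_{y}(t)\) is the number of citations received in year \(y\) to items the journal published in year \(t\), and \(N_{t}\) is the number of source items published in year \(t\). As a worked example with invented numbers: a journal that published 120 and 80 source items in the two preceding years and receives 500 citations to them in the census year has an IF of 500/200 = 2.5.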

The use of the impact factor in research evaluation

Particularly in Germany, but also in Austria and Japan, the IF has acquired an influence well beyond its original purpose. In these countries, research assessment rests too heavily on the inflated status of the impact factor.

In its original sense, the IF is a comparative measure of the quality of a journal. The IF simply reflects the ability of journals and editors to attract the best papers available [2]. Hence, the IF stands for the quality of the journal, not for the quality of the individual paper. In practical use in Germany, however, the IF of the journal is taken as a measure of the individual paper. Furthermore, the journal’s IF is attributed to each author of that specific article, irrespective of the number of authors or the contribution each of them made to the article. The individual citation rate of an author or article is entirely neglected in this system.

Despite these evident limitations, the IF is very influential. Although even Thomson Scientific acknowledges that the impact factor has grown beyond its control and is being used in many inappropriate ways, the impact factors of journals have been used to decide whether authors are promoted, given tenure, offered a position in a department or awarded a grant. In some countries, especially in Europe and Japan, government funding of entire institutions depends on the number of publications in journals with high impact factors. Finally, the sum of a person’s IF points is taken as a measure of his or her individual research quality. This has obvious appeal for an academic administrator who knows neither the subject nor the journals [3]. Seglen points out that about 15% of the articles in a typical journal account for half of the citations gained by that publication. This means that an average paper in a journal with a high impact factor may not, in fact, be cited much more frequently than the average paper in a lower-ranking journal. Therefore, the IF of the source journal should not be used as a substitute measure of the citation impact of individual articles in the journal [4].
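The consequence of this skew is easy to work through with invented numbers. Consider a hypothetical journal with 100 source items and an IF of 5, i.e. 500 citations in the census window. If 15 of those articles attract 250 of the citations (roughly 17 each), the remaining 85 articles share the other 250, averaging about 3 citations each:

\[
\underbrace{15 \times 16.7}_{\approx\,250} + \underbrace{85 \times 2.9}_{\approx\,250} \approx 500
\]

The typical article in this journal is therefore cited far less often than the IF of 5 suggests.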

Limitations of the impact factor

As the impact factor does not represent the scientific quality of an individual paper in the respective journal, the systematic counting of IF points for authors is inappropriate. More fundamentally, it is questionable whether the reference list of an article really reflects the significance of the papers that are cited. The reasons that lead a scientist to cite a paper, or (even more importantly) to omit a citation, are not open to objective study [5].

Influences on and manipulation of the impact factor

Publishers find the IF important, since libraries rely on it to make purchasing decisions. Editors find the IF important, since high-ranking journals attract highly qualified authors, who in turn increase the IF through their contributions. It is well known that the editors of many journals plan and implement strategies to massage their impact factors.

There are many ways to influence the impact factor of a journal. These mechanisms have gained considerable influence on the IF and range from legitimate support to outright manipulation. The spectrum of “playing the impact factor game” [6] may include:

  • Editorial influence on self-citation. A journal self-citation occurs when a paper submitted to a specific journal cites, in its reference list, papers published in the same journal within the preceding 2 years. The question of how far self-citation serves as an instrument of manipulation has recently attracted interest. Thomson Scientific found that the self-citation rate shows only a weak correlation with the impact and subject of a journal. In general, however, self-citation within a journal seems to correlate inversely with the impact factor: journals with a high IF (over 5.0) have low self-citation rates, and high self-citation rates are most common among journals with a lower IF (below 0.5). There is also a weak correlation between self-citation rate and the size or specificity of the category assigned to a journal. Of the journals listed by Thomson Scientific, 82% have a self-citation rate at or below 20%. Graefe’s Archives, for example, has a self-citation rate of 7%. Smaller journals not publishing in English may have problems attracting high-ranking authors. A scientific journal, however, cannot exist without manuscript submissions. Consequently, the editors of such a journal may be tempted to promote citation of their own journal.

    A relatively high self-citation rate can be due to several factors. It may arise from a novel or highly specific topic for which the journal provides a unique publication venue. A high self-citation rate may also result from the journal having a low total number of citations (a small denominator) and few incoming citations from other journals. In journals with a low total citation count, a small change in the number of self-citations can produce a large shift in the IF. It is also possible that self-citation derives from an editorial practice of the journal, resulting in a distorted view of the journal’s participation in the literature [7].

    Publications loaded with self-citations solely to increase the IF should not occur under adequate editorial management, and they are unacceptable for scientific societies as well as for editors, publishers and authors. Such publications can have a particularly strong influence on the IF if the total citation count of the journal concerned is low (see the numerical sketch below). Such a practice occurred recently when a journal’s IF jumped 18 ranks because of a single paper containing 303 self-citations. The ranking of journals within a subspecialty can hence be significantly distorted, which may be harmful to other journals.

  • Author-derived influence on self-citation. Repeated citation by authors of their own papers may increase their personal citation rates and the IF of the journals in which they publish. In a more subtle way, systematic cross-citation among a few journals may boost each journal’s IF and is difficult to detect. These influences operate with more discretion, but they too can be called a rather questionable form of manipulating the IF.

  • A less obvious manipulation of the IF is to include more articles that do not “count” as source items, such as abstracts or “letters to the editor”. Such items can still attract citations that enter the numerator, while they are excluded from the denominator, i.e. the number of citable papers in the journal; the result is a higher IF (see the numerical sketch below).

  • Another strategy to increase the IF is to publish many review articles. Review articles have a high chance of being cited even though they are not considered original scientific work: it is easier for authors to cite one review than the dozens of studies that it summarises. In fact, among the journals with the highest IF are review journals and journals with a high number of review articles.

  • Timing of publication can also affect the IF. Considering the “sampling” period of 2 years, a good paper published in January has 11 months longer to attract citations than a paper published at the end of the year.

  • Even worse is editorial pressure on authors to remove citations of competing journals, or of journals not publishing in English, from the reference list.

    Obviously, most journals do not participate in these questionable practices. Given the mechanisms involved in improving the IF, however, subspecialties such as ophthalmology, with a limited number of journals and little overlap with other fields, may be particularly vulnerable to them.

    The Committee on Publication Ethics (COPE) considers artificial manipulation of the impact factor unethical. Editors should be aware of the fine line between encouraging an improvement in the impact factor and artificially manipulating the figures [8].

    Therefore, the DOG (German Society of Ophthalmology) and the Editor and Editorial Board of Graefe’s Archives distance themselves from such unethical editorial practices.
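    The arithmetic behind the manipulations listed above is easy to demonstrate. The following Python sketch uses invented numbers (it models no real journal) to show how a single heavily self-citing paper, or the reclassification of papers as non-source items, shifts the IF of a journal with a low citation count:

      def impact_factor(citations, source_items):
          # Two-year IF: citations in the census year to items published
          # in the previous 2 years, divided by the source items of those years.
          return citations / source_items

      # Hypothetical small journal: 150 source items, 180 incoming citations.
      baseline = impact_factor(180, 150)        # 1.20

      # One paper adds 303 self-citations (cf. the episode described above),
      # assuming all of them fall within the 2-year window.
      padded = impact_factor(180 + 303, 150)    # 3.22

      # Reclassifying 50 papers as non-citable items (letters, abstracts)
      # shrinks the denominator while their citations still count.
      trimmed = impact_factor(180, 150 - 50)    # 1.80

      print(baseline, padded, trimmed)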

  • There are more appropriate means of evaluating individual or group research output, such as the h-index. Recently, Jorge Hirsch suggested an index reflecting the balance between a researcher’s publications and the number of citations per publication: someone with an h-index of 50 has written 50 papers that have each received at least 50 citations (a minimal computation is sketched after this list). Counting total papers, for example, could reward those with many mediocre publications, whereas counting only the highest-ranked papers may not recognise a large and consistent body of work [9].

  • The “Institute of Science and Technology Studies” (IWT) in Bielefeld, Germany, measures the visibility of the North Rhine-Westphalian faculties of medicine in the leading international scientific journals. As a first step, complete lists of publications (based on MEDLINE/PubMed and the Web of Science) are prepared for each institutional unit (clinic, centre, institute, section, lab, research group) within the faculties. The lists are verified (and, where necessary, corrected and supplemented) within these units by the scientists themselves, before any indicators are built. After verification, the lists are integrated into a calibrated publication database for each faculty. Since every publication is assigned exactly to the relevant unit(s) during the verification process, valuable bibliometric indicators can afterwards be drawn from the database at various levels of institutional aggregation. A comprehensive analysis of publication output and impact (citation rates) is performed annually for the most recent 5-year period. Actual citation counts are identified for each publication instead of using proxy measures such as short-term impact factors. Citation counts are normalised against the relevant expectation values on two “communication channel” levels: journal and field. This enables comparisons between institutional units despite their possibly different disciplinary profiles [10].
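The h-index computation mentioned above is straightforward; the following Python sketch (function and example values are ours, chosen for illustration) returns the largest h such that h papers each have at least h citations:

    def h_index(citation_counts):
        # Largest h such that h papers have at least h citations each.
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Five papers cited 10, 8, 5, 4 and 3 times yield an h-index of 4.
    print(h_index([10, 8, 5, 4, 3]))

The normalisation step used by the IWT can likewise be summarised generically (the IWT’s exact expectation values are not reproduced here) as

\[
s_i = \frac{c_i}{E[\,c \mid \text{journal (or field)},\ \text{year}\,]}
\]

where \(c_i\) is the observed citation count of publication \(i\) and the denominator is the citation count expected for publications in the same journal (or field) and year; values of \(s_i\) above 1 indicate above-expectation impact.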

Improving the perception of a journal within the scientific community

The IF should be used as only one measure when judging a journal. Print quality (pictures, figures), circulation, editorial standards, rapid and effective peer review, a short time lag between acceptance and print, and visibility should also be taken into account. The more people who know about and use the journal, the more likely they are to cite papers within it. Publishers use various methods to this end: offering free trial access to the online journal, keen pricing models and including the journal in consortia agreements. Collaborations with CrossRef and Google are also useful for raising the profile of the journal.

It is time to reconsider the whole process of accurately assessing an individual paper’s worth, not only to scientists but also to the wider community of readers. It surely makes more sense to measure the citations to individual articles than to use a journal’s impact factor as a proxy measure [9]. Where a paper is published will then matter less, since individual articles are increasingly downloaded regardless of where they originally appeared.

With great pleasure we announce our co-operation with the International Society of Ocular Trauma (ISOT), and we are honoured that its president, Prof. Ferenc Kuhn, has chosen our journal as the society journal. We are also privileged that Prof. Kuhn has accepted our invitation to join Prof. Wong, Prof. Weinreb and Prof. Tano as International Co-Editor. We look forward to a fruitful co-operation with Prof. Kuhn and the ISOT.

Graefe’s Archives is also the official publishing journal of the DOG (German Society of Ophthalmology) and of the Gonin Club.