Do subjective journal ratings represent whole journals or typical articles? Unweighted or weighted citation impact?
Introduction
Journal rating metrics—indicators of journal impact, prestige, reputation, utility, or perceived quality—can be readily classified into two types (Tahai & Meyer, 1999). Revealed preference metrics are those that represent actual behaviors such as publishing, indexing, and citing. The most common revealed preference metrics are citation metrics such as the h index, impact factor (IF), source normalized impact per paper (SNIP), Eigenfactor (EF), article influence score (AI), and SCImago journal rank (SJR). In contrast, stated preference metrics—also known as subjective or reputational ratings—represent scholars’ opinions or hypothetical behaviors (e.g., “Which of these journals are most important to your work?” “Which carry the most weight in tenure and promotion decisions?”). Stated preference metrics are generally based on surveys of authors or faculty. They are most likely to be found in the social sciences and humanities, where the relationship between citation impact and perceived quality or reputation is not always straightforward. Moreover, stated preference metrics may better represent the opinions of scholars outside the “publish or perish” community—managers, policymakers, teachers, and industrial researchers, for instance (Bollen & Van de Sompel, 2008; Gorraiz & Gumpenberger, 2010; Schlögl & Gorraiz, 2010).
This paper uses multiple journal ratings in four disciplines—criminology and criminal justice, library and information science, public administration, and social work—to investigate two research questions:
- (1) Are stated preference journal ratings more closely related to size-dependent citation metrics (those that represent the impact of the journal as a whole) or to size-independent citation metrics (those that represent the impact of a typical article)?
- (2) Are stated preference journal ratings more closely related to unweighted citation metrics (those that do not account for the impact of each citing journal) or to weighted citation metrics (those that do account for the impact of each citing journal)?
The first question relies on an important distinction. Size-dependent (whole journal) metrics such as total citations, EF, and the h index represent the number of citations accruing to all the articles in the journal. All else equal, a journal that publishes more articles will gain more citations, a higher EF, and a higher h index. In contrast, size-independent (typical article) metrics such as AI, CiteScore, IF, SJR, and SNIP divide total impact by the number of articles published and are therefore not influenced by journal size. For citation metrics, the distinction between size-dependent and size-independent indicators is clear. With stated preference ratings, however, the instructions to survey respondents seldom specify whether they ought to be evaluating entire journals or a typical article within each journal. Section 4 addresses this question—whether scholars (respondents) consider journal size when rating journals.
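To make the distinction concrete, here is a minimal Python sketch with invented data: two hypothetical journals share the same per-article citation distribution but differ in size, so the size-dependent tallies (total citations, h index) diverge while the size-independent average does not. The journal names, counts, and metric choices are illustrative assumptions, not data from this study.

```python
# A minimal sketch (hypothetical data): journal size affects size-dependent
# metrics (total citations, h index) but not size-independent ones
# (mean citations per article).

def h_index(citations_per_article):
    """Largest h such that at least h articles have >= h citations each."""
    ranked = sorted(citations_per_article, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Two hypothetical journals with identical per-article impact, different sizes.
small_journal = [10, 8, 6, 4, 2]        # 5 articles
large_journal = [10, 8, 6, 4, 2] * 4    # 20 articles, same distribution

for name, cites in [("small", small_journal), ("large", large_journal)]:
    total = sum(cites)                  # size-dependent
    mean = total / len(cites)           # size-independent
    print(f"{name}: total={total}, h={h_index(cites)}, mean={mean:.1f}")

# The large journal earns 4x the total citations and a higher h index
# (8 vs. 4), yet the mean citations per article is 6.0 for both.
```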
The second question is based on the distinction between unweighted metrics (which assign equal weight to each citation, regardless of the characteristics of the citing journal) and weighted metrics (which assign higher weights to citations that appear in more influential journals). Influence refers to citedness and, in the case of SCImago Journal Rank, network centrality. Although nearly 20 unweighted and weighted citation metrics are available from data download sites such as Journal Citation Reports (JCR), Eigenfactor, CWTS Journal Indicators, SCImago Journal & Country Rank, Scopus Journal Metrics, and Google Scholar Metrics, it is not obvious that either unweighted or weighted metrics are preferable as indicators of impact, prestige, reputation, or perceived quality. Section 5 presents one way of addressing this issue; it identifies the type of indicator, unweighted or weighted, that more closely coincides with the journal ratings assigned by scholars.
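As a rough illustration of the weighting idea (not the actual SJR, EF, or AI algorithms, which derive influence scores recursively from the full citation network), the sketch below applies fixed, hypothetical influence weights to each citing journal; every name and number is invented.

```python
# A minimal sketch (invented numbers): unweighted vs. weighted citation counts.
# An unweighted metric treats every citation equally; a weighted metric scales
# each citation by the influence of the journal it comes from. Real weighted
# metrics (SJR, EF, AI) compute these weights recursively; fixed weights are
# used here only to keep the example short.

# Citations received by hypothetical Journal X, by citing journal.
citations_to_x = {"J_high": 40, "J_mid": 40, "J_low": 120}

# Hypothetical influence weights for the citing journals.
influence = {"J_high": 2.0, "J_mid": 1.0, "J_low": 0.5}

unweighted = sum(citations_to_x.values())
weighted = sum(n * influence[j] for j, n in citations_to_x.items())

print(unweighted, weighted)  # 200 vs. 180
# Most of X's citations come from a low-influence journal, so its weighted
# impact (180) falls below its raw citation count (200).
```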
These research questions are important for at least two reasons. First, investigations such as this can help us understand the relationships between impact, reputation, prestige, and related constructs as they apply to journals. We can use established citation metrics as landmarks, comparing them with stated preference ratings in order to better understand what survey respondents mean when they rate journals. This kind of comparison is possible, however, only if we first address the questions presented here. Second, comparisons of multiple metrics can help us gauge the convergent validity of each one. Newer indicators such as SNIP and SJR are more likely to be accepted if we know they are correlated with other indicators of journal “quality,” especially when those other indicators use dissimilar methods to arrive at similar results (Cohn & Farrington, 2011; Martin, 1996; So, 1998; Weisheit & Regoli, 1984).
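One way to carry out such a comparison, sketched below with fabricated toy values, is to compute rank correlations between a stated preference rating and competing citation metrics. The references include cocor, an R package for formally testing differences between dependent correlations; this Python sketch stops at the Spearman coefficients themselves.

```python
# A minimal sketch (fabricated toy values) of the convergent-validity check
# described above: correlate one stated preference rating with two citation
# metrics and see which coincides more closely with scholars' judgments.

from scipy.stats import spearmanr

# One value per journal (six hypothetical journals).
ratings    = [4.8, 4.1, 3.9, 3.2, 2.7, 2.1]   # mean survey rating
unweighted = [950, 400, 700, 310, 260, 90]    # e.g., total citations
weighted   = [3.1, 2.2, 1.9, 1.4, 1.0, 0.4]   # e.g., an SJR-style score

r_u, _ = spearmanr(ratings, unweighted)
r_w, _ = spearmanr(ratings, weighted)
print(f"rating vs. unweighted: {r_u:.2f}")    # 0.94 on these toy values
print(f"rating vs. weighted:   {r_w:.2f}")    # 1.00 on these toy values
```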
Previous research
Although many studies have investigated the correlations among citation metrics, fewer have examined the relationships between citation metrics and stated preference ratings. Two findings from the pre-2000 literature are especially notable:
- (1) Stated preference ratings sometimes represent each journal's influence within a particular field or subfield rather than its more general scholarly impact. For instance, He and Pao (1986) discovered that the journal ratings assigned by scholars in the field
General methods
Four disciplines—criminology and criminal justice (CCJ), library and information science (LIS), public administration (PAD), and social work (SWK)—were selected for investigation. The social sciences were chosen because they are covered well in both citation databases and stated preference studies. Moreover, the chosen fields are those for which weighted and unweighted metrics might be expected to differ. In LIS, for instance, a distinction can be made between the top information science
Context
Many stated preference surveys ask about the prestige or reputation of journals. Others ask about their scholarly impact; their coverage of recent innovations in theory or methods; their usefulness for research, teaching, or practice; or their value as publication outlets for tenure and promotion. In still other cases, the key construct is never specified. As Weisheit and Regoli (1984) have noted, the lack of clear answers to the question “What is being measured?” is a major difficulty in the
Context
As noted in Section 1, unweighted citation metrics assign equal weight to each citation, regardless of the characteristics of the citing journal. In contrast, weighted metrics assign higher weights to citations that appear in more influential journals (González-Pereira, Guerrero-Bote, & de Moya-Anegón, 2010; Guerrero-Bote & de Moya-Anegón, 2012; West, Bergstrom, & Bergstrom, 2010; West et al., 2015).
The use of weighted metrics is based on two assumptions: first, that citations in high-impact
Conclusion
This paper presents modest evidence that stated preference journal ratings are more closely related to size-independent citation metrics (those that represent the impact of a typical article) than to size-dependent citation metrics (those that represent the impact of the journal as a whole). It also shows, more conclusively, that stated preference journal ratings are more closely related to weighted citation metrics (those that account for the impact of the citing journal) than to unweighted
Author contributions
Conceived and designed the analysis: William H. Walters.
Collected the data: William H. Walters.
Contributed data or analysis tools: William H. Walters.
Performed the analysis: William H. Walters.
Wrote the paper: William H. Walters.
Acknowledgements
I am grateful for the comments of Esther Isabelle Wilder and three anonymous referees. Support for this research was provided by Manhattan College, Menlo College, Harris Manchester College, and Oxford University.
References (94)
- The correlation between citation-based and expert-based assessments of publication channels: SNIP and SJR vs. Norwegian quality assessments. Journal of Informetrics (2014).
- The development of a ranking tool for refereed journals in which nursing and midwifery researchers publish their work. Nurse Education Today (2010).
- Popular and/or prestigious? Measures of scholarly esteem. Information Processing & Management (2011).
- A multi-method evaluation of journals in the decision and management sciences by US academics. Omega (2000).
- Why economists rank their journals the way they do. Journal of Economics and Business (1991).
- The difference between popularity and prestige in the sciences and in the social sciences: A bibliometric analysis. Journal of Informetrics (2010).
- A new approach to the metric of journals’ scientific prestige: The SJR indicator. Journal of Informetrics (2010).
- A further step forward in measuring journals’ scientific prestige: The SJR2 indicator. Journal of Informetrics (2012).
- A comprehensive examination of the relation of three citation-based journal metrics to expert judgment of journal quality. Journal of Informetrics (2016).
- A discipline-specific journal selection algorithm. Information Processing & Management (1986).
- Measuring contextual citation impact of scientific journals. Journal of Informetrics.
- Measuring the impact of accounting journals using Google Scholar and the g-index. British Accounting Review.
- Expert-based versus citation-based ranking of scholarly and scientific publication channels. Journal of Informetrics.
- What's familiar is excellent: The impact of exposure effect on perceived journal quality. Journal of Informetrics.
- Comparing the expert survey and citation impact journal ranking methods: Example from the field of artificial intelligence. Journal of Informetrics.
- Assessing sport management journals: A multi-dimensional examination. Sport Management Review.
- An assessment of the relative impact of criminal justice and criminology journals. Journal of Criminal Justice.
- Citation impact analysis of top ranked computer science journals and their rankings. Journal of Informetrics.
- A group decision support approach to evaluating journals. Information & Management.
- Ranking forestry journals using the h-index. Journal of Informetrics.
- Some modifications to the SNIP journal impact indicator. Journal of Informetrics.
- LIS journals scientific impact and subject categorization: A comparison between Web of Science and Scopus. Scientometrics.
- The limits of bibliometrics for the analysis of the social sciences and humanities literature.
- A subjective and objective assessment of journals in educational psychology. Psychology and Education.
- Usage impact factor: The effects of sample characteristics on usage-based impact metrics. Journal of the American Society for Information Science and Technology.
- Journal status. Scientometrics.
- A principal component analysis of 39 scientific impact measures. PLoS ONE.
- Journal rankings: Comparing reputation, citation and acceptance rates. International Journal of Information Systems in the Service Sector.
- Should we use the mean citations per paper to summarise a journal's impact or to rank journals in the same field? Scientometrics.
- CWTS journal indicators.
- Accrediting knowledge: Journal stature and citation impact in social science. Social Science Quarterly.
- Journal citation reports.
- Scholarly influence and prestige in criminology and criminal justice. Journal of Criminal Justice Education.
- Public administration as an academic discipline: Trends and changes in the COCOPS academic survey of European public administration scholars.
- Cocor: A comprehensive solution for the statistical comparison of correlations. PLoS ONE.
- Cocor: Comparing correlations.
- Objective and subjective rankings of scientific journals in the field of ergonomics: 2004–2005. Human Factors and Ergonomics in Manufacturing.
- Scopus journal metrics.
- A cluster analysis of scholar and journal bibliometric indicators. Journal of the American Society for Information Science and Technology.
- An alternative interpretation of recent political science journal evaluations. PS: Political Science & Politics.
- Ranking political science journals: Reputational and citational approaches. PS: Political Science & Politics.
- The use and valuation of journals in planning scholarship: Peer assessment versus impact factors. Journal of Planning Education and Research.
- Google scholar metrics.
- Evaluating operations management-related journals via the Author Affiliation Index. Manufacturing & Service Operations Management.
- Going beyond citations: SERUM—A new tool provided by a network of libraries. LIBER Quarterly.
- Citation analysis and peer ranking of Australian social science journals. Scientometrics.
- Statistical methods for psychology.
Cited by (24)
Index on relative sustainability impact – a suggestive tool for strengthening regional cooperation: Case of South Asia
2021, Research in Globalization. Citation excerpt: In subjective methods, the weight of attributes is principally based on judgment, experience, and preference of the experts or researchers (Liu et al., 2016; Alemi-Ardakani et al., 2016). Surveys, questionnaires, and interviews are a few of the techniques adopted to obtain the required information and data for the evaluation of these subjective methods (Tavana et al., 2015; Walters, 2017). Broadly, there are two categories of subjective weighting: direct weighting procedures and pair-wise comparison techniques.
Zero-based serials review: An objective, comprehensive method of selecting full-text journal resources in response to local needs
2020, Journal of Academic Librarianship. Citation excerpt: Our review method, like many others, relies heavily on the perceptions of faculty. Although we know that those perceptions are not strongly related to scholarly impact, the relationships between objective and subjective measures of journal “quality” remain largely unexplored (Ahlgren & Waltman, 2014; Cahn, 2014; Haddawy et al., 2016; Knowlton et al., 2014; Walters, 2017b; Walters & Markgren, 2019). It is also unclear whether subjective opinions of journals vary systematically among individuals—whether faculty at research universities have consistently different opinions than those at undergraduate colleges, for instance.
Can we automate expert-based journal rankings? Analysis of the Finnish publication indicator
2020, Journal of Informetrics. Citation excerpt: The first problem arises because expert-based evaluations are expensive and not straightforward (Thelwall, 2017). The latter issue might be caused by the tendency of reviewers to deliberately or unintentionally evaluate according to their own (research) interests, a biased selection of reviewers, or other conflicts of interest (Dondio, Casnici, Grimaldo, Gilbert, & Squazzoni, 2019; Haddawy, Hassan, Asghar, & Amin, 2016; Saarela, Kärkkäinen, Lahtonen, & Rossi, 2016; Serenko & Bontis, 2018; Walters, 2017; Zacharewicz et al., 2019). Especially if a publication is published in a relatively unusual national language (such as Finnish, where experts who actually speak this language might compose only a very small group), it can be challenging to find knowledgeable reviewers who do not have any conflict of interest (Letto-Vanamo, 2019).
Composite journal rankings in library and information science: A factor analytic approach
2017, Journal of Academic Librarianship. Citation excerpt: Different ranking methods can lead to different results, however. Citation-based rankings are not always consistent with stated preference rankings, and the ratings assigned by library directors do not always match those assigned by the deans of MLIS programs (Kim, 1991; Kohl & Davis, 1985; Nisonger & Davis, 2005; Walters, 2017a). This study presents a factor analysis of journal ranking metrics based on data for 55 LIS journals.