Comparing the expert survey and citation impact journal ranking methods: Example from the field of Artificial Intelligence
Highlights
► We developed a ranking of 182 peer-reviewed Artificial Intelligence journals by surveying 873 active AI researchers.
► We compared ranking lists developed through expert surveys with those based on journal citation measures.
► Expert survey and citation-impact journal ranking methods cannot be used as substitutes.
► In their ranking decisions, journal quality survey respondents are strongly influenced by their current research interests.
► The expert survey method favors journals that publish more articles per year.
Section snippets
Introduction and literature review
Contemporary society is fascinated with rankings. Rankings exist in all areas of human activity. Anecdotal evidence suggests that everything that can possibly be ranked has already been ranked. Examples include, but are not limited to, sports stars, celebrities, websites, technologies, movies, songs, business people, cars, and even countries. Academia also jumped on the bandwagon a long time ago; there are various rankings of universities, programs, departments and individual researchers,
Method
Sampling is one of the most critical issues in journal quality surveys. The key objective is to ensure that each outlet is represented by the same number of people who published at least once in it. If, for example, respondents are recruited from academic associations, conferences or distribution lists, they may favor specific research topics or outlets sponsored by these bodies and rank respective journals higher. To avoid this situation, names and emails of 30 authors were randomly chosen
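The sampling step described above, drawing an equal number of raters per journal from the pool of people who have published in it, can be sketched as follows. The journal names and author pools here are hypothetical placeholders, not the study's actual data:

```python
import random

# Hypothetical author pools: in the study, candidate raters for each
# outlet were people who had published at least once in that outlet.
author_pool = {
    "Artificial Intelligence": [f"ai_author_{i}" for i in range(120)],
    "Machine Learning": [f"ml_author_{i}" for i in range(95)],
}

SAMPLE_SIZE = 30  # the study drew 30 authors per journal


def sample_raters(pool, k=SAMPLE_SIZE, seed=42):
    """Draw the same number of raters for every journal at random,
    rather than recruiting via associations or mailing lists, so that
    no research topic or sponsored outlet is over-represented."""
    rng = random.Random(seed)
    return {journal: rng.sample(authors, k) for journal, authors in pool.items()}


raters = sample_raters(author_pool)
```

Sampling directly from each journal's author list, instead of from conference or association rosters, is what guards against the topical bias the section describes.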
Implications
The purpose of this study was to develop a ranking of peer-reviewed AI journals, compare the consistency of journal ranking lists created by means of expert surveys and citation-impact measures, and investigate whether personal and demographic journal rater characteristics, such as country/region of residence, educational background, major research area, years of academic experience and gender, affect their ranking decisions. For this, 182 AI journals were rated by 873 AI researchers. Based on
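Consistency between two ranking lists of the same journals is commonly assessed with a rank correlation such as Spearman's rho. A minimal sketch, using made-up scores for five journals and ignoring tie handling, might look like this (the study does not publish its exact computation here):

```python
def spearman_rho(x, y):
    """Spearman rank correlation between two score lists for the same
    items (illustration only; ties are not handled)."""
    def ranks(v):
        # Rank 1 = highest score.
        order = sorted(range(len(v)), key=lambda i: v[i], reverse=True)
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))


# Hypothetical scores for five journals under the two methods.
survey_scores = [4.5, 4.1, 3.8, 3.2, 2.9]   # mean expert ratings
citation_scores = [120, 95, 101, 60, 44]    # citation-impact values
rho = spearman_rho(survey_scores, citation_scores)
```

A rho well below 1 for lists like these is the kind of evidence behind the conclusion that the two methods are not interchangeable.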
Conclusion
In this study, a ranking of 182 peer-reviewed journals from the field of Artificial Intelligence was constructed based on the survey results from 873 AI researchers who published at least once in one of these outlets. The final ranking was compared with those based on the family of h-indices obtained from Google Scholar, and some differences between the methods were highlighted and explained. It was concluded that these techniques cannot be used as substitutes; instead they may be used to
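The citation-based rankings mentioned above rest on the h-index family. For reference, the basic h-index (the largest h such that h publications each have at least h citations) can be computed from a list of per-paper citation counts; the counts below are invented for illustration:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each,
    as computed from (e.g.) Google Scholar citation counts."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i      # the i most-cited papers all have >= i citations
        else:
            break
    return h
```

The variants in the h-index family (g-index, hc-index, and so on) adjust this basic counting rule, but all start from the same sorted citation profile.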