Elsevier

Journal of Informetrics

Volume 5, Issue 4, October 2011, Pages 629-648
Comparing the expert survey and citation impact journal ranking methods: Example from the field of Artificial Intelligence

https://doi.org/10.1016/j.joi.2011.06.002

Abstract

The purpose of this study is to: (1) develop a ranking of peer-reviewed AI journals; (2) compare the consistency of journal rankings developed with the two dominant ranking techniques, expert surveys and journal citation impact measures; and (3) investigate the consistency of journal ranking scores assigned by different categories of expert judges. The ranking was constructed from a survey of 873 active AI researchers who rated the overall quality of 182 peer-reviewed AI journals. It is concluded that the expert survey and citation impact journal ranking methods cannot be used as substitutes; instead, they should be used as complementary approaches. The key problem of the expert survey ranking technique is that respondents' ranking decisions are strongly influenced by their current research interests. As a result, their scores reflect their present research preferences rather than an objective assessment of each journal's quality. In addition, the expert survey method favors journals that publish more articles per year.
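A common way to quantify the consistency of two journal ranking lists, such as the expert survey ranking and a citation impact ranking discussed above, is a rank correlation statistic. The sketch below computes Spearman's rho with a small stdlib-only implementation; the journal scores and h-index values in it are illustrative stand-ins, not data from the study, and the paper does not necessarily use this exact statistic.

```python
# Hedged sketch: measuring agreement between two journal ranking lists
# (e.g. expert-survey scores vs. a citation impact measure) with
# Spearman's rank correlation. All numbers below are invented examples.

def ranks(scores):
    """Assign ranks (1 = best) to scores; tied scores share the average rank."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    rank = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 0-based positions i..j, made 1-based
        for k in range(i, j + 1):
            rank[order[k]] = avg
        i = j + 1
    return rank

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative data: survey scores and h-index values for five journals.
survey = [4.8, 4.1, 3.9, 3.0, 2.5]
h_index = [60, 45, 50, 20, 15]
print(round(spearman_rho(survey, h_index), 2))  # → 0.9
```

A rho near 1 would indicate that the two methods order journals almost identically; the study's finding that the methods are complementary rather than interchangeable corresponds to only moderate agreement.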

Highlights

► We developed a ranking of 182 peer-reviewed Artificial Intelligence journals by surveying 873 active AI researchers.
► We compared the ranking lists developed through expert surveys and journal citation measures.
► Expert survey and citation impact journal ranking methods cannot be used as substitutes.
► In their ranking decisions, respondents are strongly influenced by their current research interests.
► The expert survey method favors journals that publish more articles per year.

Section snippets

Introduction and literature review

Contemporary society is fascinated with rankings. Rankings exist in all areas of human activity. Anecdotal evidence suggests that everything that can possibly be ranked has already been ranked. Examples include, but are not limited to, sports stars, celebrities, websites, technologies, movies, songs, business people, cars, and even countries. Academia jumped on the bandwagon long ago; there are various rankings of universities, programs, departments and individual researchers,

Method

Sampling is one of the most critical issues in journal quality surveys. The key objective is to ensure that each outlet is represented by the same number of people who have published in it at least once. If, for example, respondents are recruited from academic associations, conferences or distribution lists, they may favor specific research topics or outlets sponsored by these bodies and rank the respective journals higher. To avoid this situation, the names and email addresses of 30 authors were randomly chosen
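The sampling idea described above, drawing the same number of past authors from each journal so that no outlet is over-represented, can be sketched as follows. The author pools, names, and the per-journal sample size are illustrative assumptions, not the study's actual data.

```python
# Hedged sketch of the sampling strategy: randomly pick an equal number
# of authors who have published in each journal. Author names below are
# hypothetical placeholders.
import random

def sample_respondents(authors_by_journal, n_per_journal=30, seed=42):
    """Randomly select up to n_per_journal past authors for each journal."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    sample = {}
    for journal, authors in authors_by_journal.items():
        k = min(n_per_journal, len(authors))  # guard against small pools
        sample[journal] = rng.sample(authors, k)  # without replacement
    return sample

# Tiny illustrative author pools (hypothetical names):
pool = {
    "Artificial Intelligence": [f"author_{i}" for i in range(100)],
    "Machine Learning": [f"author_{i}" for i in range(40)],
}
picked = sample_respondents(pool, n_per_journal=30)
print({j: len(a) for j, a in picked.items()})
# → {'Artificial Intelligence': 30, 'Machine Learning': 30}
```

Sampling without replacement per journal, rather than recruiting through associations or mailing lists, avoids the sponsorship bias the passage warns about.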

Implications

The purpose of this study was to develop a ranking of peer-reviewed AI journals, compare the consistency of journal ranking lists created by means of expert surveys and citation impact measures, and investigate whether personal and demographic characteristics of journal raters, such as country/region of residence, educational background, major research area, years of academic experience and gender, affect their ranking decisions. For this, 182 AI journals were rated by 873 AI researchers. Based on

Conclusion

In this study, a ranking of 182 peer-reviewed journals from the field of Artificial Intelligence was constructed based on the survey results from 873 AI researchers who had published at least once in one of these outlets. The final ranking was compared with rankings based on the family of h-indices obtained from Google Scholar, and some differences between the methods were highlighted and explained. It was concluded that these techniques cannot be used as substitutes; instead, they may be used to
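The h-index family mentioned above can be computed directly from a journal's per-article citation counts. The sketch below implements the standard h-index and Egghe's g-index, two members of that family; the citation counts are invented for illustration, not taken from the study's Google Scholar data.

```python
# Hedged sketch: two members of the h-index family computed from a
# journal's per-article citation counts. The counts below are invented.

def h_index(citations):
    """Largest h such that h articles each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the top g articles together have >= g^2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

cites = [25, 17, 9, 8, 6, 3, 2, 1]
print(h_index(cites), g_index(cites))  # → 5 8
```

Because the g-index weights highly cited articles more heavily than the h-index, the two can order the same journals differently, which is one reason a family of indices, rather than a single number, was compared against the survey ranking.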
