Computers & Graphics

Volume 89, June 2020, Pages 77-87

Special Section on STAG 2019
ReviewerNet: A visualization platform for the selection of academic reviewers

https://doi.org/10.1016/j.cag.2020.04.006

Highlights

  • An integrated visualization of scholarly data can support the academic reviewer search process.

  • The visualization of scholarly data helps to avoid conflicts of interest and to build a fairly distributed pool of reviewers.

  • A well-combined visualization of citation and co-authorship relations alone can reduce the need for complicated content analysis techniques.

  • The evaluation of the platform with members of the Computer Graphics community demonstrated an improvement over the traditional reviewer search process.

  • The evaluation confirmed that users were able to get acquainted with the system with very limited training.

Abstract

We propose ReviewerNet, an online, interactive visualization system aimed at improving the reviewer selection process in the academic domain. Given a paper submitted for publication, we assume that good candidate reviewers can be chosen among the authors of a small set of pertinent papers; ReviewerNet supports the construction of such a set of papers by visualizing and exploring a literature citation network. The system helps journal editors and Program Committee members to select reviewers who do not have any conflict of interest and are representative of different research groups, by visualizing the careers and co-authorship relations of candidate reviewers. The system is publicly available, and is demonstrated in the field of Computer Graphics.

Introduction

The number of digital academic documents, either newly published papers or documents resulting from digitization efforts, grows at a very fast pace: the Scopus digital repository counts more than 75 million records [1]; the Web of Science platform has more than 166 million records from journals, books, and proceedings [2]; Microsoft Academic collects about 232 million publications [3]. In 2019, over four hundred thousand new records in Computer Science were added to DBLP [4]. Bibliometric analysts estimate a doubling of global scientific output roughly every nine years [5]. Therefore, the volume, variety and velocity of the scholarly documents generated satisfy the big data definition, so that we can now talk about big scholarly data [6].

Sensemaking in this huge reservoir of data calls for platforms adding an element of automation to standard procedures – such as literature search, expert finding, or collaborator discovery – to reduce the time and effort spent by scholars and researchers. In particular, there has been an increase in the number of visual approaches supporting the analysis of scholarly data. Visualization techniques have been proposed to help stakeholders get a general understanding of sets of documents, navigate them, and find patterns in publications and citations. Federico et al. [7] survey 109 visual approaches for analysing scientific literature and patents published between 1991 and 2016. Most of the works focus on the visualization of document collections and citation networks. A more ambitious goal for visualization platforms would be to enable users to gain enough understanding to make decisions.

In this paper, we focus on the problem of reviewer finding by journal editors or International Program Committee (IPC) members. On the one hand, reviewers are expected to know a subject well, and have enough expertise to fairly judge the work of colleagues. On the other hand, reviewers should not have any conflict-of-interest with the authors of the paper under scrutiny, where conflict means that the professional evaluation may be compromised due to a working relationship between reviewers and authors. Therefore, finding good candidate reviewers requires editors and IPC members to analyse many aspects simultaneously: topic coverage (possibly over time), stage of career, and past and ongoing collaborations. Every member of the community has his/her own approach to reviewer finding, which usually involves bibliographic research, and frequent visits to public repositories like DBLP [8] and researchers’ home pages. In any case, one has to confront possibly large collections of data to make decisions, and a user may easily get lost after following a few links.

We propose ReviewerNet, a visualization platform which facilitates the selection of reviewers. ReviewerNet builds on a reference database including papers, authors and citations from selected sources (journal articles and conference papers) taken from the Semantic Scholar Research Corpus [9]. ReviewerNet offers an interactive visualization of multiple, coordinated views about papers and researchers that help assess the expertise and conflicts of interest of candidate reviewers. The interface is shown in Fig. 1. The intuition behind ReviewerNet is that the authors of relevant papers are good candidate reviewers. One of the main advantages of ReviewerNet is that it relies only on citations, to analyse the literature, and on co-authorship relations, to analyse conflicts. Citations are an essential part of research: they represent a credible source of information about topic similarity and intellectual influence. Moreover, since citations are chosen by the authors themselves, they are a very robust cue to relatedness. Similar reasoning holds for co-authorship relations. Therefore, an important contribution is the demonstration that a well-combined visualization based on citation and co-authorship relations alone can support the reviewer search process, without the need for more complicated content analysis techniques.
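To make this intuition concrete, here is a minimal Python sketch of how candidate reviewers could be gathered from the citation neighbourhood of a few seed papers and then filtered for conflicts using co-authorship information only. The find_candidates function, its input format and the conflict rule are illustrative assumptions, not the actual ReviewerNet implementation, which works interactively rather than as a one-shot query.

```python
def find_candidates(papers, seeds, submitters):
    """Gather candidate reviewers from the citation neighbourhood of a few
    seed papers, then drop anyone with a co-authorship conflict.

    `papers` maps a paper id to {"authors": set of names,
    "cites": set of cited paper ids}. All names here are hypothetical.
    """
    seeds = set(seeds)
    submitters = set(submitters)

    # 1. Relevant papers: the seeds plus papers they cite or that cite them.
    relevant = set(seeds)
    for pid, p in papers.items():
        if pid in seeds:
            relevant |= p["cites"]          # papers cited by a seed
        elif p["cites"] & seeds:
            relevant.add(pid)               # papers citing a seed

    # 2. Candidate reviewers: authors of the relevant papers.
    candidates = set()
    for pid in relevant & papers.keys():
        candidates |= papers[pid]["authors"]

    # 3. Conflicts: the submitters and all of their co-authors.
    conflicted = set(submitters)
    for p in papers.values():
        if p["authors"] & submitters:
            conflicted |= p["authors"]
    return candidates - conflicted


# Example with made-up data:
papers = {
    "p1": {"authors": {"Alice"}, "cites": set()},
    "p2": {"authors": {"Bob"}, "cites": {"p1"}},          # p2 cites the seed p1
    "p3": {"authors": {"Carol", "Dave"}, "cites": set()},  # unrelated paper
}
print(find_candidates(papers, seeds={"p1"}, submitters={"Alice"}))
# -> {'Bob'}  (Alice is a submitter; Carol and Dave authored nothing relevant)
```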

A preliminary version of the platform was presented at the 2019 edition of the Smart Tools and Applications in Graphics conference [10]. After validating the platform with a pool of senior researchers in Computer Graphics (see Section 5), we incorporated their feedback to improve the tool. Therefore, the present article describes the updated platform, which features both a revised interface and new functionalities:

  1. improved visualization of the network of papers, with a different network layout for better investigation and interaction;

  2. improved visualization of the network of researchers, for better interactive community discovery;

  3. improved consistency of visual elements;

  4. a novel initialization procedure to complement the manual insertion of paper titles, by importing and automatically parsing a list of bibliographic references (a toy sketch of such parsing follows the list);

  5. the possibility for users to generate personalized instances of the platform, in any academic domain, by specifying a list of venues of interest.
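As a toy illustration of the initialization procedure in item 4, the snippet below shows one naive way to recover candidate titles from a pasted reference list. The extract_titles function and its heuristics are hypothetical and far simpler than a robust bibliographic parser; the recovered titles would presumably then be matched against the reference dataset, playing the role of manually inserted seed papers.

```python
import re

def extract_titles(bibliography: str) -> list[str]:
    """Heuristically pull probable paper titles out of a pasted reference
    list, one reference per line. A naive illustration only."""
    titles = []
    for line in bibliography.splitlines():
        line = line.strip()
        if not line:
            continue
        # Drop a leading label such as "[12]" or "12."
        line = re.sub(r"^\[?\d+\]?[.)]?\s*", "", line)
        # Split into period-separated fields and guess that the longest
        # one is the title (author and venue fields are usually shorter).
        parts = [p.strip() for p in line.split(". ") if p.strip()]
        if parts:
            guess = max(parts, key=len)
            if len(guess) > 20:
                titles.append(guess)
    return titles


# Example:
refs = ("[1] M. Salinas, D. Giorgi, F. Ponchio, P. Cignoni. "
        "ReviewerNet: A visualization platform for the selection of "
        "academic reviewers. Computers & Graphics, 2020.")
print(extract_titles(refs))
# -> ['ReviewerNet: A visualization platform for the selection of academic reviewers']
```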

We demonstrate the platform usage in the field of Computer Graphics, on a reference dataset containing 17,754 papers, 108,155 citations, and 23,386 authors. We show how ReviewerNet can be used to search for reviewers who are expert on a certain topic, are at a certain career stage, have a certain track record of publications, have no conflicts with either the submitters or the other reviewers, and are well distributed across the scientific community.

The tool is free to use and open source; the source code is available at https://github.com/cnr-isti-vclab/ReviewerNet, while the demonstration platform is available at https://reviewernet.org/.

Section snippets

Related work

Concerning the reviewer selection process, the literature has mostly focused on the automatic reviewer assignment task, which is a different problem from ours. Indeed, the reviewer assignment problem requires finding the best assignment between a finite set of reviewers (e.g., the members of the Programme Committee of a conference) and a finite set of papers (the papers submitted to the conference). This is usually done using bipartite graph matching and taking into account pertinence of the
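For contrast with our problem, the reviewer assignment task mentioned above can be phrased as an optimal matching over a reviewer-submission suitability matrix. The following toy example is not part of ReviewerNet: it uses a made-up 3x3 suitability matrix with SciPy's linear_sum_assignment to show that formulation, whereas ReviewerNet addresses the upstream step of finding suitable, conflict-free reviewers in the first place.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy suitability scores: rows are reviewers, columns are submissions.
# In a real assignment system these would come from topic analysis.
suitability = np.array([
    [0.9, 0.2, 0.4],
    [0.3, 0.8, 0.5],
    [0.6, 0.4, 0.7],
])

# linear_sum_assignment minimizes cost, so negate the scores to maximize.
rev_idx, sub_idx = linear_sum_assignment(-suitability)
for r, s in zip(rev_idx, sub_idx):
    print(f"reviewer {r} -> submission {s} (score {suitability[r, s]:.1f})")
```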

ReviewerNet description

ReviewerNet supports the various actions that journal editors and IPC members perform while choosing reviewers, namely, searching the literature about the submission topic, looking for active experts in the field, and checking their conflicts of interest. ReviewerNet does so by integrating an overview visualization of the literature with a visualization of the career of potential reviewers, their conflicts of interest, and their networks of collaborators. This combined visualization helps to make

Technical details

In ReviewerNet, the visualized data pertain to three types of entities: papers, researchers, and citations. The data attributes are both quantitative and qualitative, and the time dimension is central.
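As a minimal sketch, the three entity types could be modelled as follows; the class and field names are illustrative assumptions, not the actual ReviewerNet data schema. The time dimension enters through the publication year of each paper, from which the career span of a researcher can be derived.

```python
from dataclasses import dataclass

# Illustrative records only, not the actual ReviewerNet schema.
@dataclass
class Paper:
    pid: str            # paper identifier
    title: str
    venue: str
    year: int           # the time dimension attached to every paper

@dataclass
class Researcher:
    rid: str            # researcher identifier
    name: str
    paper_ids: list[str]   # career span = the years of these papers

@dataclass
class Citation:
    citing: str         # pid of the citing paper
    cited: str          # pid of the cited paper
```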

Concerning papers, let P denote the set of papers in a reference dataset, and let P_V ⊆ P be the set of papers relevant to a submission. P_V is built by the users starting from a small number of seed papers of their choice. A paper p ∈ P_V is marked as selected if it is considered a key paper by the

Evaluation

We evaluated the preliminary version of ReviewerNet described in [10] on the Computer Graphics dataset described in Section 3.1. We decided to ask the scientific community directly, and to involve real end-users instead of in-house testers. We sent an email to the 60 members of the IPC of the Eurographics 2019 conference, and to additional experts with a record of publications in the top venues of the sector. None of the subjects were involved in the work on ReviewerNet, and none of them

Conclusions

We have presented ReviewerNet, a novel system for choosing reviewers by visually exploring scholarly data. ReviewerNet enables scientific journal editors and members of IPCs to search the literature about the topic of a submitted paper, to identify experts in the field and evaluate their stage of career, and to check possible connections with the submitting authors and among the reviewers themselves. This helps to avoid conflicts and to build a fairly distributed pool of reviewers. To do so,

CRediT authorship contribution statement

Mario Salinas: Conceptualization, Methodology, Software. Daniela Giorgi: Conceptualization, Methodology, Writing - original draft. Federico Ponchio: Conceptualization, Methodology, Software. Paolo Cignoni: Conceptualization, Methodology, Writing - original draft, Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References (25)

  • S. Khan et al.

    A survey on scholarly data: from big data perspective

    Inf Process Manag

    (2017)
  • N.J. van Eck et al.

    CitNetExplorer: a new software tool for analyzing and visualizing citation networks

    J Informetr

    (2014)
  • Scopus fact sheet. 2019. Accessed on February 14th, 2020,...
  • Web of Science platform: summary of coverage. 2020. Accessed on February 14th, 2020,...
  • Microsoft Academic. 2020. Accessed on February 14th, 2020,...
  • DBLP statistics – new records per year. 2019. Accessed on February 14th, 2020,...
  • L. Bornmann et al.

    Growth rates of modern science: a bibliometric analysis based on the number of publications and cited references

    J Assoc Inf Sci Technol

    (2015)
  • P. Federico et al.

    A survey on visual approaches for analyzing scientific literature and patents

    IEEE Trans Vis Comput Graph

    (2017)
  • M. Ley

    The DBLP computer science bibliography: evolution, research issues, perspectives

    International symposium on string processing and information retrieval

    (2002)
  • W. Ammar et al.

    Construction of the literature graph in semantic scholar

    NAACL

    (2018)
  • M. Salinas et al.

    A visualization tool for scholarly data

    Eurographics conference on smart tools and applications in graphics

    (2019)
  • F. Wang et al.

    A comprehensive survey of the reviewer assignment problem

    Int J Inf Technol Decis Mak

    (2010)