An overview of innovations in the external peer review of journal manuscripts.

Background: There are currently numerous innovations in peer review and quality assurance in scholarly publishing. The Research on Research Institute conducted a programme of co-produced projects investigating these innovations. This literature review was part of one such project, 'Experiments in peer review', which created an inventory and framework of peer review innovations. The aim of this literature review was to aid the development of the inventory by identifying innovations in the external peer review of journal manuscripts reported in the scholarly literature and by providing a summary of the different approaches. This did not include interventions in editorial processes. Methods: This review of reviews is based on data identified from Web of Science and Scopus, limited from 2010 to 2021. A total of 291 records were screened, with six review articles chosen as the focus of the literature review. Items were selected that described approaches to innovating peer review or illustrated examples. Results: The overview of innovations is drawn from six review articles. The innovations are divided into three high-level categories: approaches to peer review, reviewer focussed initiatives, and technology to support peer review, with sub-categories of results presented in tabular form and summarised. A summary of all innovations found is also presented. Conclusions: From a simple synthesis of the review authors' conclusions, three key messages are presented: observations on current practice; authors' views on the implications of innovations in peer review; and calls for action in peer review research and practice.


Introduction
The Research on Research Institute (RoRI) conducted a number of co-produced projects with academic publishers and scholarly communication service providers. These projects investigated current experiments and innovations in quality assurance and peer review in scholarly publishing. This literature review is part of one such project entitled 'Experiments in Peer Review'.
The 'Experiments in Peer Review' project aimed:
• to identify, analyse, and evaluate current innovations in peer review and other forms of quality control/assurance of research outputs
• to assess their potential impacts on scholarly communication in particular and the research environment in general.
The first phase of this project was to create an inventory and framework of experiments in peer review carried out by publishers and other scholarly communication organisations. The inventory is based on a widely distributed survey of scholarly publishers designed to retrieve information on current innovations at grass roots level (Kaltenbrunner et al., 2022). The purpose of this literature review is to provide context and aid the development of the inventory. To do so, we identify publications reporting innovations or experiments with peer review in scholarly publishing and we create a summary of the different types of initiative as identified in the literature.

Definitions, framing and limitations
In this review the definition of peer review reflects the use of the term in the literature selected. That is an inclusive and broad interpretation of the phrase, including many aspects of evaluation and quality assessment. So, whilst a 'peer' is commonly understood as an individual researcher with significant expertise and interest in a given field, 'peer review' includes the actions of other stakeholders or their agents within the system such as copyeditors and formatters, artificial intelligence (AI) software, members of the public, patients, advocates and lobbyists. Peer review, used in this broad sense, also encapsulates activities designed to ensure research integrity such as plagiarism checks and monitoring compliance with data management policies. Similarly, peer review also includes informal responses, questions and comments posted on social media, pre-print servers, e-journals or other places online in response to a given research output. These types of informal responses were found in a study examining disciplinary knowledge production (Woods, 2018), where examples of researchers' peer review practice were identified. Group interviews with researchers in applied fields identified that several other types of 'peer review', involving not scientific experts but members of the public, people from other professions, and advocacy and lobbying groups, were also common occurrences. Over 20 years ago Barnett (2000) spoke about knowledge becoming a commodity tested by consumer reaction. Particularly in applied fields, this is coming to fruition, with this type of research having a greater number of consumers, invoking greater and different types of reaction and review (Hoepner, 2019).
The scope of this review is restricted to peer review in scholarly publishing, although similar observations about the benefits and limitations of peer review are found in publications concerned with other parts of the research system. One example is the onerous nature of peer review, which is a factor also associated with peer review in research funding (Bendiscioli & Garfinkel, 2021), and research assessment (Wilsdon, 2015), in addition to scholarly publishing (Smith, 2006). It is also worth noting that this literature review is concerned with what people are doing, and what the innovations entail rather than why they are doing it. The motivations giving rise to innovation, for example to reduce bias or achieve greater efficiency are peripheral to the purpose of this study.
This literature review describes innovations. In a similar treatment to the interpretation of the term 'peer review', we adopt a broad interpretation of the term 'innovation'. The definition proposed by Rogers (2003) is fitting for this review: 'an idea, practice, or object that is perceived as new by an individual or other unit of adoption' (p. 26). That is, the status of something as an innovation does not rest on the date of its inception; it could be in existence for 4 hours or 40 years. What makes it an innovation is that the practice or idea is new to those who are recommending or suggesting its use in their particular context. Within the academic publishing industry, what may be considered an innovation in one organisation, such as reviewers and authors being blind to each other's identity, would no longer be considered an innovation in another. Broadly speaking, an innovation can be implemented in different ways within an organisation: first, by intervening in usual practice; second, by intervening on a smaller scale, in one area of work, to test something out before implementing it more widely; or third, by setting up a separate innovative project or initiative, outside existing processes. In this review all types of innovations reported in the included studies are captured, and the review is agnostic to breadth of implementation; this includes potential models of peer review which are as yet untried.

Amendments from Version 1
We have updated the title and abstract to more accurately reflect the scope of the article. We have also added a limitations section which states the focus of the review, namely on published literature and on innovations in review of journal manuscripts, and on innovations in the external peer review process, not including innovations in the editorial process. In the methods section, we have provided a citation to give a definition of the different review types included in the article. We have removed the term 'meta-summary' and defined the article as an overview (review of reviews) and provided a citation to define this term in the methods section. We have revisited each included study and included information on quality assessment where it was found in the included articles. We have provided a new, simplified, and accurate flow chart. We have refined our definitions of 'peer review' and 'innovation' to be more precise and better aligned with their usage in the article.
Any further responses from the reviewers can be found at the end of the article.

Limitations
Like any scholarly work, this literature review is limited by its scope and data sample. It is an overview of approaches to external peer review of journal manuscripts. It is based on searches of academic bibliographic databases and does not include evidence identified through grey literature searches. The source documents are six literature reviews and the article provides an overview of the peer review approaches stated in these reviews. Reviewing these overview articles enabled generic categories to be created to encompass broad types of intervention, such as training or other support for peer reviewers. In line with our aims, not every individual example or implementation of these broad types of initiative was found in the included papers. Also, given the data sources, some approaches to peer review may have been omitted, such as volunteering to review.
In addition, the review focuses on the peer review process involving external reviewers, rather than the editorial side of the review process, hence examples of innovations of this nature are not included.
This review of reviews does not contain a quality assessment of the included review papers. Where a quality assessment of primary studies was performed within the included reviews, we have indicated this in the descriptions of the included studies.
The next section will describe the methodology used in this literature review and how we will present the results. This is followed by the results themselves in which we classify different types of innovations. We conclude with some discursive reflections on the current situation.

Methodology
Overall approach
This review was undertaken to set the context and inform an empirical study (Kaltenbrunner et al., 2022). To complete this task, it was not necessary to identify every publication discussing peer review, but rather to capture as many different forms of peer review as possible. With this in mind we did not limit the search to a particular publication type, and our search results included several review articles. On closer inspection of these articles, it was clear that they covered all the peer review innovations identified from screening and coding the results of the literature search. This is with the possible exception of modelling or scientometric studies examining aspects of the peer review system (Ortega, 2017; Ragone et al., 2013), or those proposing a framework for best practice for academic publishing (Waters et al., 2020) or an audit of publishing processes (Crewe, 2020). However, these papers were slightly out of scope for the remit of the empirical work. Therefore, the decision was made to present the data through the organising structure of six recent literature reviews on the topic, as an overview 'review of reviews'.
Of high importance to this review and RoRI's work with scholarly publishers is the Reimagine Review registry set up by Accelerating Science and Publication in biology (ASAPbio). The projects included in this registry provide live examples of the types of peer review innovations summarised in this review, such as post-publication review and pre-print review. More details of the registry are provided in Box 1.

Box 1. ASAPbio's Reimagine Review
Reimagine Review is a registry of peer review experiments. As of January 6, 2023, it includes 62 registered projects.
The registry is presented as a searchable database. The user is able to filter the records in various ways including type of output, who initiates the review, whether the reviews are stand alone or linked to a specific publication, the level of transparency or 'openness', whether a decision is made at the end of the review (to publish or not), discipline, format of reviewing (such as comments or scores) and some characteristics of the process (for example, if professional editors are used, if comments are moderated). Alternatively, the top page enables authors to choose by output: pre-print, articles already accepted for publication by a journal, privately shared manuscripts and finally 'other outputs' such as protocols, data sets etc. This enables authors to choose the most appropriate service that fits with their needs. The types of innovation featured in the Reimagine Review inventory (ASAPbio, 2021) such as post-publication review have been included in this review.

Data collection
A literature search was conducted in Web of Science and Scopus to identify relevant papers using synonyms for 'peer review' and 'innovation'. Records were screened to exclude studies that neither described a type of innovation in peer review nor gave an example of a specific innovation. Further records were retrieved using citation searches of the remaining relevant records. Following these search and screening iterations, 68 records were initially included. The full texts of these records were retrieved and screened, and six review articles were chosen as the focus of the literature review. Searches were conducted in January 2021, limited to publications from 2010 onwards, and no study filters were applied. An example search strategy is presented below.

Study selection / coding
Search results were initially downloaded to EndNote (X9.3.2) to facilitate de-duplication, after which study selection was completed in the Rayyan software (Ouzzani et al., 2016). This allowed easy viewing of decision making by the project team. Initial categories of innovations were developed to include overview articles, types of peer review, reviewer focussed initiatives, technological initiatives and specific uses of peer review (such as use of language or plagiarism). This exercise of developing topic categories aided the organisation of material in the review. As previously stated, several review articles were identified in this process, and on closer inspection it was clear that they covered all the peer review innovations identified from screening and coding the results of the literature search. The review therefore focussed on six literature reviews in a review of reviews format. Please note an earlier version of this article can be found on SocArXiv (doi: 10.31235/osf.io/qaksd).

Presentation of results
The results are presented using narrative and tabular formats, followed by a summary of the review and conclusions. The results section begins with a description of included studies, followed by a detailed description of innovations in peer review types using three high level categories: approaches to peer review, reviewer focussed initiatives, and technology to support peer review. This includes definitions of each type of innovation extracted from the included studies. A summary table of each innovation and where these have been reported is provided at the end of the review. See Figure 1 for a PRISMA diagram giving details of the search process.

Details of included studies
The review includes six overview studies which range from a systematic review of randomised controlled trials to a succinct summary of review innovations. All the studies were valuable in capturing the different types of current innovation in peer review. Each study is described below, presented in chronological order. This is followed by a narrative summary of each type of innovation, and then a summary of all innovations found.
Bruce et al. (2016) is a systematic review of randomised controlled trials of interventions in the peer review process. The authors assessed several outcome measures: final manuscript quality, quality of the peer review report, rejection rate, time spent on peer review, and time spent on the peer review process. The authors found, based on these outcome measures, that compared with standard peer review, reviewer training was not successful in improving the quality of the peer review report, and use of checklists by peer reviewers to check the quality of a manuscript did not improve the quality of the final article. However, the addition of a specialised statistical reviewer did improve the quality of the final article, and open peer review was also successful in improving the quality of review reports. It did not affect the time reviewers spent on their report. Open peer review also decreased the number of papers rejected. Finally, blinded peer review did not affect the quality of review reports or rejection rates. The authors conclude that there is a lack of evidence on the effectiveness of various peer review procedures, especially given the central role of peer review in science.
Tennant et al. (2017) is a narrative review published in F1000Research. Its process is notable as the review has 33 authors. They are experts in scholarly publishing, and it is this expertise which formed the basis of the review. The authors identified papers through searching databases such as Google Scholar, Scopus, and Library & Information Science Abstracts (LISA). Additionally, there was a lengthy peer review process which was published with the review, including reviewers' reports and authors' responses, a key part of the F1000 model (and itself an innovation in peer review). The authors review the history of peer review and its myriad shortcomings, and detail the potential solutions that have been developed to date to address these challenges. The pros and cons of newer innovations in peer review such as portable, de-coupled and collaborative peer review are then discussed, as well as an examination of different levels of anonymity. They then go on to focus on a number of social web platforms, such as Reddit, and expand on the benefits and limitations of each platform considering three criteria: quality control and moderation, certification, and incentive structures. Alongside the review of new innovations, the authors are clear to signal that there are particular benefits of peer review and that it has deep and far-reaching cultural significance within research practices which should not be underestimated. A hybrid model is suggested combining aspects of different platforms. The authors stress that any such innovation cannot succeed without engagement from researchers, but this is in tension with the structure of researcher incentives in the research system. The review concludes with two main points. The first is to decouple peer review from journal publishing in order to return to what the authors suggest would be a community-led process.
The second is that there is very little evidence to support the uptake of different methods of peer review, so research to measure the effectiveness of these different approaches in achieving different goals of peer review is essential. Three key initiatives are referred to as leaders in this respect: the PEERE initiative, the Research Integrity and Peer Review journal, and the International Congress on Peer Review and Scientific Publication.
Burley (2017) is a narrative summary published in Information Services and Use which reviews new approaches to peer review in scholarly publishing. The author is affiliated to the BMC Group at the Springer Nature publishing company. The article does not refer to scholarly literature, but to new innovations and ways of working in practice. It begins by stating the benefits of peer review identified by key stakeholders. The author then goes on to summarise newer practices such as the increase in double-blind peer review in scientific disciplines. She then discusses open peer review, post-publication peer review and transparent peer review. Finally, initiatives aimed at increased efficiency, such as cascading peer review and sectional or partial peer review, are discussed. The article concludes by highlighting the increased focus on rewarding and training reviewers by scholarly publishers and learned societies, the overall improvement in reviewer recognition and efficiency, and the increase in experiments in transparent peer review.
Horbach & Halffman (2018) is a narrative review published in Research Integrity and Peer Review. It aims to describe current forms of peer review and their implementation, and also to consider the role and expectations of peer review. The authors present an historical account of peer review and describe methods of peer review to date, including recent technological advances. Four dimensions are identified in peer review innovations: 'the selection conditions [such as the timing of the review], the identity and access among actors involved, the level of specialisation in the review process, and the extent to which technological tools have been introduced' (p. 9). The authors then present a typology of peer review characteristics ordered by these four dimensions. This is followed by a discussion on the role of the academic publishing system and expectations of peer review. The authors underline the large diversity of review processes currently used. They also suggest four key expectations for peer review: 'assuring quality and accuracy of research, establishing a hierarchy of published work, providing fair and equal opportunities to all actors and assuring a fraud-free research record' (p. 12). The article concludes by highlighting the lack of empirical evidence to test the efficacy of peer review methods and the tensions that exist between what peer review can deliver and what is expected of it, for example its ability to identify fraudulent research or methodological errors. The authors suggest there is a new, additional perspective on how research knowledge is perceived, fuelled by statistical reviews, post-publication reviews and other innovations: from a library of knowledge to a set of scientific facts. They suggest that this perception of research as 100% accurate knowledge fuels retractions and the rewriting of documents to create seemingly perfect accounts.
Tennant (2018) is a review article which centres on the current status of peer review and its role in scholarly publishing in a digital age.
The author states a conceptual difference between peer review as an idea, or 'a singular ideologue' (p. 2), and as a practice. Open peer review is cited as a return to the original purpose of peer review: to be collegial and constructive, and to improve arguments and address gaps in logic. The author states that peer review now has an additional gatekeeping function, and that it is also used by commercial organisations as a selling point. He goes on to summarise the benefits and drawbacks of new models of peer review, considering what the job of peer review is and how these functions can be achieved in the future, making better use of the technology we now have. Success would be an open, participative model of peer review that is a genuine alternative rather than an add-on to the status quo. However, the author also states that it is difficult to separate out the value and prestige that comes from publishing in journals, and this drives particular behaviours and limits uptake of new models. He refers to the 'penguin effect' (Choi, 1997) where the level of perceived risk is greater than the motivation to change. This effect is compounded by the fact that moving away from current practices is often not in sync with the behaviours required for researchers' job security and career progression. The author concludes by advocating a new framework based on current technological and communication norms, by revisiting the core purposes of peer review, links to incentives for researchers and a clear consideration of how all stakeholders fit into any new system.

Barroga (2020) is a narrative review. In it, the author briefly discusses delays to peer review and the issue of anonymity as manifested in 'blinding' of reviewers. Finally, the discussion turns to reviewer incentives and training. The author concludes that the increase in innovations has been rapid and that there is a lack of evidence as to how effective newer methods of review are in identifying research malpractice. He also suggests that review quality may be compromised where financial incentives are given. He advocates for an honest appraisal of stakeholders' contribution to the process, with reviewer training, core competencies for reviewers and engagement by the research community on this issue being paramount.

See Table 1 for a summary of the general characteristics of the studies included.

Description of innovations in peer review
The innovations presented below are divided into three high-level categories: approaches to peer review, reviewer focussed initiatives and technology to support peer review. This section begins with narrative summaries of the different approaches to peer review identified, organised into the following subcategories:

Pre/post publication review
The key feature of peer review innovations in this subcategory is the timing of the review. The authors' definitions reveal a mixture of informal and formal peer review, open and confidential modes, and expert and lay commentary. Barroga (2020) (after Tennant et al., 2017) distinguishes between pre- and post-publication review and commenting. The key difference between review and commenting is who is responding to the publication. In pre- and post-publication review, this is field experts. In pre- and post-publication commenting, this amounts to comments or feedback from any interested party, irrespective of their academic or disciplinary credentials. Burley (2017) also describes post-publication review as taking place after publication, in an open manner. In an interesting use of the term 'post-publication', Horbach & Halffman (2018) describe post-publication peer review on pre-print servers. This clearly problematises the established use of the word 'publish' to mean publication in a journal or monograph. If something has been posted on a pre-print server, then it is published, albeit in a self-published mode, incurring only initial checks for eligibility to be posted on the particular pre-print service.
One other review innovation in this subcategory is the use of registered reports or similar approaches. This is where a research design is evaluated before the research has begun. It typically applies to quantitative empirical research that follows a fixed a priori design. Once a study is designed, the protocol is reviewed, before any data is collected. The value of this method is to reduce questionable research practices, where researchers deviate from their original intention and methodology and indulge in malpractice such as p-hacking and cherry-picking results to create more eye-catching conclusions. Registered reports are championed by the Center for Open Science amongst others.

Collaboration and decoupling
The approaches to peer review in this subcategory reflect a loosening of established roles and have been organised within a summary table (Table 2) to illustrate this, with the more marked changes presented last. The types of peer review move from increased collaboration and interaction between stakeholders (collaborative review) to reassignment of roles in organisational innovations (decoupled post-publication review). Collaborative review (Barroga, 2020) is the process where reviewers, editors and other contributors pool their comments to offer one set of consolidated recommendations for authors to address. Horbach & Halffman (2018) present a similar process which they name 'discussion during review'. In a step further, this same process takes place online so other people can follow the process and add their own comments. Barroga (2020) and Tennant (2018) suggest the additional participants are limited to 'other interested scientists' but it is unclear how this can be enforced given the public platform.
Moving away from increased interaction as a focus of innovation and the slight modification of traditional procedures, the next type of innovation in this category is cascading or transferring peer review. This innovation was found three times in this review. Barroga (2020), for example, describes a process whereby an article that has already been peer reviewed and rejected by one journal is given the opportunity to be considered by another journal within the same publishing company. Barroga (2020) also describes several forms of review that are decoupled from individual journals: recommendation, portable peer review, independent peer review and de-coupled post-publication review. A key aspect of all of these is that they are journal agnostic, that is, the process of peer review is not directly linked to a particular journal's decision-making process in relation to the article. Recommendation is about the promotion of particular articles that have been reviewed post-publication through a respected consortium of researchers such as F1000Prime, now Faculty Opinions (Thorburn, 2020). As defined by Barroga (2020) (after Tennant et al., 2017), portable peer review involves paying for an article to be reviewed and receiving the reports to submit to a publisher alongside the article. Independent peer review is again a commercial review company providing a service for an author, with the difference being that some publishers foot the bill for the review when a paper is subsequently published in their journal. De-coupled post-publication review, as described by Barroga (2020), similarly separates the review process from the publication decisions of any individual journal.

Table 2 (extract). Types of peer review, with descriptions from the included papers.
Registered reports. Horbach & Halffman (2018): 'In this form of peer review, which is still restricted mainly to medical fields and psychology, manuscripts are usually reviewed in two stages. The initial and most important review stage takes place after the study has been designed, but prior to data collection.'
Results free review. Burley (2017): 'Results-free review means that in a first step the paper is evaluated only for its rationale and method, not the results. If the former is deemed suitable for publication, then this is offered in principle. In a second step, the results are reviewed too. Publication may only be rejected if the results deviate unjustifiably from the stated aims and methods.'
Specialised review. Horbach & Halffman (2018): 'Over the past decades, new actors have joined the review process, thereby compelling peer review itself to become more specialised. This applies to its content, for example introducing specialised statistical reviewers, as well as to the process ...'
Specialised review. Horbach & Halffman (2018): '... plagiarism detection software tool to assist in peer review, the CrossCheck system being the most common ...'; 'Automatic analysis that checks for the correct use of statistics in manuscripts using AI.'
Specialised review. Horbach & Halffman (2018): 'The assistance of software in detecting image manipulation, which is considered an increasing form of fraud in various research areas... it has already become possible to check for bad reporting, … data fabrication and image manipulation … usually done by … the editorial team or journal's staff.'
Specialised review. Bruce et al. (2016): 'Addition of peer reviewers for specific tasks or with specific expertise such as adding a statistical peer reviewer, whose main task is to detect the misuse of methods or misreporting of statistical analyses.'

Focussed and specialised review
This category captures types of peer review that focus on one aspect or section of a publication. Soundness-only peer review (Horbach & Halffman, 2018) refers to a method of reviewing in which only the rigour of the research (as opposed to its novelty or significance) is considered in making a decision on acceptance. It is akin to the critical appraisal method often adopted in reviewing health research, for example using a Critical Appraisal Skills Programme checklist (CASP, 2018). The aim is to allow all results which meet a particular quality threshold to be published, not only the most interesting or novel results. 'Results free peer review' (Burley, 2017) refers to a method of screening papers in a two-stage review process. This involves evaluating the rationale for the study and the methods. In the case of a positive evaluation, the paper is approved for publication in principle, subject to a further full review that also includes the results. Horbach & Halffman (2018) and Bruce et al. (2016) both report instances of specialised review where a paper is reviewed with a focus on one aspect. This includes plagiarism detection, use of statistics and use of images. This work is done by various actors including researchers and editorial or journal staff, or by utilising specialist tools such as CrossCheck, a publisher initiative using the iThenticate text comparison software (Feinstein, 2008). AI tools may also be used for this kind of work. Table 2 summarises the different types of peer review found in the literature.

Reviewer incentives
This subcategory describes various incentives that are offered to induce researchers to act as peer reviewers. The incentives are manifest in direct and indirect rewards for peer review. Indirect rewards are about being a good academic citizen, taking part in peer review as a usual part of academic work. These rewards are well established. As examples of indirect rewards of a private nature, Barroga (2020) mentions being up to date with one's field and having the opportunity to influence the direction of the field. Other indirect rewards are of a public nature, such as being invited to be on an editorial board. Regardless of their private or public nature, indirect rewards do not bring an immediate benefit but instead help to promote one's reputation, gain experience and contribute to the wider research system.

Reviewer support
Innovations to support reviewers include standards, training and tools for reviewers (Barroga, 2020). Table 3 summarises the different types of reviewer focussed initiatives found in the literature.

Table 3. Peer reviewer focussed initiatives.

Non-financial incentives (Barroga, 2020): 'Nonfinancial incentives may come in the forms of frequent reviewer invitations, being up-to-date with research developments, opportunities to influence science, increased acumen in reviewing, free journal access or subscription, access to databases/research platforms and digital libraries, acknowledgment in journal websites, publicized reviews, letter of thanks, certificates of excellence, and editorial board appointment.'

Crediting incentives (Barroga, 2020): 'Crediting incentives may be given by formally recognising the reviewing work and linking peer review activity to ORCID records using DOIs.'

Financial incentives (Barroga, 2020): 'Financial incentives can be received through the Rubriq system by providing pre-publication reviews or from compensation derived from the article processing charge. Although cash incentives can hasten reviews, many journals cannot realistically afford it. Cash incentives may also affect the quality of review, transform the review process into business, or damage the moral sentiments of researchers. Other forms of financial incentives include waiver of publication charges and free access to paid articles.'

Reviewer credit (Tennant, 2018): 'How to provide and receive appropriate credit for peer review is an ongoing debate … There is … currently a great potential scope of providing more detailed information about peer review quality, in a manner that is further tied to researcher reputation and certification. The main barrier that remains here is the fact that peer review is still largely a closed and secretive process, which inhibits the distribution of any form of credit.'

Rewarding peer review (Burley, 2017): '... recognizing and rewarding peer reviewers has become a priority for scholarly societies, publishers, and service providers. For example, societies publish lists of the most prolific and helpful reviewers; publishers give public credit and provide additional rewards; and service providers enable the collection of data on reviews and reviewers to enhance reviewer visibility and rewards. Further still, Publons is a start-up dedicated to publicly recognizing reviewers for their contribution, enabling reviewers to track and showcase their activities.'

Technology to support peer review
AI support for peer review, research discovery tools and publishing platforms, among other technologies, feature in the innovations described so far in this review. However, Barroga (2020) (after Tennant et al., 2017) goes beyond the use of software tools and discusses possible future models of peer review based on particular types of technology. This section reports on these suggested future models. This is followed by Table 4, which summarises current use of technology and future models.
Barroga (2020) reviews the potential models of peer review put forward by Tennant et al. (2017) and assesses them against six features of open access publishing: openness, anonymity, accountability, bias, time and incentive. All the proposed models are open, in that review reports are public, but the identity of authors and reviewers remains unknown. On the factors of bias (whether editorial decisions are made public) and anonymity (whether the identities of editors and reviewers are revealed to authors), no assessment is made due to the models being hypothetical. The Reddit, Stack Exchange, and Hypothesis models are rated as offering greater author-reviewer accountability due to more transparent interactions between these stakeholders. Greater efficiency may be found in the GitHub and Wikipedia models, with review time shortened or delays minimised. Reviewer incentives are found embedded within the Stack Exchange, blockchain and hybrid peer review models.

Discussion and summary
As review articles, the studies in our review draw conclusions based on several items of primary evidence. By bringing together these conclusions in a simple synthesis, it is possible to reveal some key messages, given that any similar conclusions drawn in the various review articles have the combined weight of all the primary evidence reviewed. The conclusions of the review articles have been integrated below, to highlight observations on current practice, perspectives on the implications of new ways of working and calls for action. The strongest conclusions, based only on frequency, are the need for more research to determine the effectiveness of new models of peer review, and the need for a full reflection on the peer review system, including all stakeholders.

Observations on current practice
There are a number of observations summing up the current state of affairs in peer review: the increase in innovation has been rapid (Barroga, 2020), there has been an overall improvement in reviewer recognition and efficiency of peer review processes, and an increase in initiatives trialling transparent peer review (Burley, 2017). Technology is not being used to its full potential in the peer review system (Tennant, 2018). Moving to reviewer focussed innovations, two of the reviews covered highlight the use of reviewer training, Burley (2017) commenting that scholarly publishers and learned societies are increasing their focus on training (and rewarding) reviewers, and Barroga (2020) suggesting that reviewer training and core competencies are important to consider as part of a broader reflection on the peer review system as a whole.
Perspectives on the implications of newer practices

A number of authors discuss their interpretations of the implications of newer practices: that quality may be compromised if reviewers are paid (Barroga, 2020); and that innovations in peer review such as post-publication reviews and statistical reviews may reinforce a particular perspective on how scientific knowledge is perceived (Horbach & Halffman, 2018). Rather than research outputs being seen as a snapshot of discovery, capturing one moment in time, which will be built on with new research, these innovations can lead to publications being edited with the aim of arriving at a set of inviolable facts. The authors suggest this is a new perspective, favoured by those with a realist or positivist view of knowledge, perceiving the research literature as a 'database of facts' rather than a 'library' (Horbach & Halffman, 2018, p. 13). Two reviews also cite specific barriers to change within the peer review system: that new models will never become mainstream whilst there is so much prestige to be gained from publishing in journals (Tennant, 2018), and calls for the decoupling of peer review from commercial interests in order to return to a community-led process. Table 5 provides a summary of innovations described in this review.

Conclusion
This review of innovations in peer review is based on papers identified in Web of Science and Scopus, limited from 2010 to 2021. A total of 291 papers were screened, with six recent review articles being included. These review articles comprise a mixture of narrative reviews, meta-analyses, state-of-the-art reviews and summary articles. They describe numerous approaches to peer review. In our overview we collated these descriptions of peer review into four subcategories: open/masked, pre/post publication, collaboration and decoupling, and focussed and specialised. We also collated mentions of reviewer focussed initiatives and presented these in the subcategories of reviewer support and reviewer incentives. We recorded and extracted references to the use of technology to aid peer review and summarised these practices, noting current uses and potential models as reported in our included papers.
The fact that there are enough review articles to warrant a review of reviews indicates the growing maturity of the field of peer review research. One review focussed on the efficacy of peer review methods in a particular field (Bruce et al., 2016), and effectiveness evidence, testing and measuring how well particular innovations meet their objectives, continues to be a growing form of research in the field. However, given the size of the field and the inherent complexity of analysing the peer review system, which spans numerous disciplines and includes varied professions in its conduct, descriptive research in any form will always be essential to record the development of innovations. This literature review is a contribution in this vein. We hope that our overview of peer review innovations will support future work in this area.

Mario Malički
Stanford University, Stanford, USA

Thank you for the updated version, but for the effort I invested in the initial review, I expected a point-by-point response to the raised comments. The naming/classification, I see, has remained as before without effort to align this with STM or other initiatives; the inclusion of a narrative summary paper without references remains questionable, as any other research paper or opinion paper might have mentioned more "innovations" than that one; and future potential models remain included with too weak a rationale behind them, as proposals listed in many papers on peer review could have fallen under the definition of potential future models.

STM Journals, Elsevier, Amsterdam, The Netherlands
Thank you for the opportunity of reviewing this insightful review. Woods et al. have analyzed the body of published 'peer-reviewed' papers on innovation in peer review in scholarly publishing and report their findings in this paper.
It is important to note that by default innovation entails failure. Null and negative results of innovative ideas organized by journal publishers and societies as well as funders might have a hard time ending up in the body of published literature, if ever written up, and as such, it is even more relevant to consider non-peer-reviewed articles for such studies.
Nevertheless, while the title and abstract mention innovations in scholarly publishing, the authors only discuss innovation in the process of reviewing journal manuscripts, and even there, only the external peer-review process through the lens of 4 previous reviews whose authors had the same focus. In this way, the authors miss the opportunity to list innovations within the wider editorial process during peer review, innovative ideas in books and monographs peer review, grant proposal peer review, and research elements such as data, code, protocol, software, and hardware peer review.
Also, by limiting the search strategy to peer-reviewed papers published in indexed journals, the authors miss a few innovative ideas published by journals and publishers that do not have an incentive to publish a peer-reviewed paper about their initiative. These stakeholders usually announce their new initiatives on their web pages, or as blog posts. Below you will find a few examples.
Examples of some other innovations in peer review that have not been listed in this paper are: crowd review, to name a few of the reviewer-focused innovations.

There is also a missed opportunity in narrowing the focus only to innovations in the external peer review process. There have been several innovative streams to improve the editorial side of the peer review process that help speed up the process and improve the quality of feedback. A good example is using AI for matching manuscripts with reviewer areas of expertise, resulting in several reviewer recommender tools such as:
○ https://www.springernature.com/gp/editors/resources-tools/reviewer-finder
○ https://www.elsevier.com/connect/editors-update/a-helping-hand-with-finding-reviewersintroducing-the-elsevier-reviewer-recommender

To summarize: I understand addressing the above criticisms means changing the methods and scope of this study, which is not preferable. I suggest the authors change their title and abstract to reflect the above-mentioned items and add a limitations section to their paper listing these limitations.

Are the conclusions drawn appropriate in the context of the current research literature? Partly
○ "This literature review" - why then use the term meta-summary in the title?
○ "The aim of this literature review" - please rephrase, and say what is the aim of this paper. Its relation and origin, and later use for the project, can be described in the introduction/methods.

Introduction
I recommend a complete rewrite and expansion of the introduction, see comments on the abstract. What is the aim of this paper? The aims of the overall project, and how it came to be are secondary.

Methods
○ "This review was undertaken" - please use the same term everywhere, and move this sentence to the background.
○ I find that the whole first section - Overall approach - should be deleted, and the search of the literature stated first. Statements like "it was not necessary to identify" - if the paragraph is not deleted, I recommend it is changed to "We felt it was not" - this is an issue or a limitation that a reviewer can oppose, and I would.
○ "We did not limit the search" - please first define what you searched for and when, and provide a full search strategy. Then explain inclusion and exclusion criteria, as well as bias assessment, or a check to see how those papers matched your definition of peer review and innovations. If none were excluded, then state here that you included all papers that described any type of intervention/innovation.
○ Details of Reimagine Review are not necessary and should be deleted. Just cite it and say how many of their interventions made it into your final inventory.
○ Data collection - I would recommend using PRISMA 2020 subtitles, i.e. information sources, and that you use all applicable subtitles despite this not being a systematic review. You did use the flow chart.

○ If, as you say in the end, review articles were only included, then perhaps umbrella review is the best term.
○ Presentation of results - would recommend deleting this paragraph.
○ I find the description of 6 reviews unnecessary and would move them to the appendix. It is also not clear from the methods what is the difference between a summary article, state-of-the-art review, and narrative review - are these terms the authors used or yours, how would you classify them, and why?
○ Open/masked peer review - You could consider using the STM taxonomy rather than open/masked - https://osf.io/68rnz/
○ "In pre and post publication commenting, this amounts to comments or feedback by any interested party" - this is not true, there are specialized preprint review platforms actively inviting specific individuals. What would you call the platform where your article was posted and is peer-reviewed by me - who was invited to review?
○ Registered reports or study protocols in my view should not be a sub-category under pre-/post-publication commenting. They are calls for review of study designs/ideas, rather than completed studies. And they can happen either as pre- or post-publication, as open or masked, as collaborative, or as decoupling. As can grant peer review. This makes me also question the term "approach to peer review" - what is approached here? In Burley's paper, she calls these models of peer review. How do other reviews call these innovations? Types, modes, approaches, descriptors? These approaches are descriptions of elements of peer review, which can one day be meta-data for peer review. A registered report is a type of article, not peer review.
○ Looking at the titles of these categories, I fear a lot has been missed by not using the STM approach for subtitles, i.e. I would rename your "approaches" to Review focused Innovations or Review process interventions, and would call the sub-categories: Reviewer identity (innovations), Review timing (innovations), Reviewer interaction (innovations), Content focus (innovations).
○ Technology support - rather than Technology to support peer review - but see my comment 15.
○ As Table 1 includes 576 included studies, I also hoped to see a table with these 576 studies classified by which innovations they cover. Was information based only on summaries in the 6 studies, without checking if something was missed by the original authors?

○ Reviewer focused initiatives - should rather be Reviewer focused innovations. But I believe you need to mention that AI checks and software checks are also Editor/publisher focused and can also be given to authors to self-evaluate their manuscripts - like language software, self-evaluation checklists, statcheck, and so on. Incentives can also be used to make editors do a better job. You also stated that Barroga (2020) cites informal training that researchers do for themselves - this is then an intervention on authors. So perhaps this category should be revised, and if the first high level is focused on the review process, then these here are perhaps focusing on review skill/expertise.
○ I find the weakest part of the summary to be the technical support, and the names assigned to them. GitHub and many other models are already captured in the "approaches" section - they are rather platforms where a manuscript or peer review reports could be stored. And so there is nothing in the GitHub model that was not covered before. The Amazon model refers to the rating of reviews - and this belongs to reviewer-focused initiatives, not here. It should have been made clear when technology is used to enable other approaches, versus when it provides a new type of innovation. I find it also very strange that the AI approach was listed primarily under focused review. Why does it matter who performs it? AI review can be pre- or post-publication, it can be open or masked (in theory, if the type of AI used is not disclosed), it can be focused or for all. Why isn't AI also mentioned under reviewer support? It can save time for reviewers to check many things. Finally, why use Barroga's models and summarize them here? Can they be called an innovation if they were never attempted in peer review, e.g. the GitHub model and the Amazon model? They are theories/models. You stated "what makes an innovation is if the practice is new to those who are implementing it" - if it was not implemented at least once, it should not have been included. It is a theory/model. What is it in this model that is not an innovation captured elsewhere? Just the use of a different term?
○ My last comment is not directly about this paper, but more on the 4 schools described in How to improve scientific peer review: Four schools of thought - https://osf.io/preprints/socarxiv/v8ghj/. The categorization of high-level innovations here and the schools showcase quite a different approach to classifications, and maybe both need to be revised in light of one another and my comments here.

I will refrain from commenting on the discussion as I believe my above comments are already extensive and ask for a lot of changes, that should then also be reflected by rewriting the discussion.
In hopes that my comments may help you improve your manuscript, Mario Malički