Published at MetaROR

November 20, 2024

Cite this article as:

Henriques, S. O., Rzayeva, N., Pinfield, S., & Waltman, L. (2023, October 13). Preprint review services: Disrupting the scholarly communication landscape? https://doi.org/10.31235/osf.io/8c6xm


Preprint review services: Disrupting the scholarly communication landscape?

Susana Oliveira Henriques 1,2,3, Narmin Rzayeva 1,2,4, Stephen Pinfield 1,5, Ludo Waltman 1,2

1. Research on Research Institute (RoRI)
2. Centre for Science and Technology Studies (CWTS), Leiden University
3. Central Library, Lisbon University Medical School
4. Azerbaijan University of Architecture and Construction
5. Information School, University of Sheffield

Originally published on August 11, 2024 at: 

Abstract

Preprinting has gained considerable momentum, and in some fields it has turned into a well-established way to share new scientific findings. The possibility of organising quality control and peer review for preprints is also increasingly highlighted, leading to the development of preprint review services. We report a descriptive study of preprint review services with the aim of developing a systematic understanding of the main characteristics of these services, evaluating how they manage preprint review, and positioning them in the broader scholarly communication landscape. Our study shows that preprint review services have the potential to turn peer review into a more transparent and rewarding experience and to improve publishing and peer review workflows. We are witnessing the growth of a mixed system in which preprint servers, preprint review services and journals operate mostly in complementary ways. In the longer term, however, preprint review services may disrupt the scholarly communication landscape in a more radical way.

Introduction

Preprints are well-established in some fields but not in all. Preprint servers make scientific work available rapidly (albeit usually in a form prior to peer review) and also openly, enabling scientific work to be accessed in a timely way not only by scientists but also by policymakers, journalists and others. The COVID-19 pandemic led to an unprecedented rise in the use of preprints by the biomedical research community. Preprints were an essential part of the communication of research about COVID-19, useful in particular as a way of accelerating communication of research results. Another potential benefit of preprints is that they allow authors to receive and incorporate feedback from the wider community prior to journal publication (Fraser et al., 2021). Nevertheless, due to the non-peer-reviewed nature of preprints, concerns about the lack of quality assurance remain (Blatch-Jones et al., 2023; Ni & Waltman, 2023). Partly in response to these concerns, a growing number of services now facilitate evaluative peer feedback (e.g., comments, recommendations, reviews) on preprints. Some of these services may even be seen as alternatives to journal-based peer review, pointing towards possible future approaches to overcoming weaknesses of the journal-based peer review system.

In Figure 1, we summarise some of the potential benefits of services facilitating the review of preprints, as they are commonly presented in sources such as the websites of preprint review services, blog posts and scientific articles. In this paper, we will explore how preprint review services are trying to achieve these benefits and how they may add value to the scholarly communication system, pointing to a cultural shift in peer review (Avissar-Whiting et al., 2023).

Figure 1. Potential benefits of services facilitating the review of preprints. *By users, we mean scientists, policymakers, journalists, and citizens in general.

Our aim is to develop a systematic understanding of the main characteristics of preprint review services in order to evaluate how these services manage preprint review and to position these services in the wider scholarly communication and peer review environment, including journal publishing. By a preprint review service, we mean a journal-independent peer review service for articles posted on a preprint server, where the peer review process is publicly visible. We report here a descriptive study based on a qualitative analysis of data available in the ReimagineReview registry and other online sources, such as the websites of preprint review services, blog posts and scientific articles. This paper is complementary to a study recently reported by Lutz et al. (2023) of an ongoing survey of Alternative Publishing Platforms. By providing additional information on preprint review services, we contribute to a better understanding of the landscape formed by these platforms and of their place in the open scholarly communication ecosystem.

We have opted not to use the term ‘peer’ when discussing platforms for the review of preprints, adopting instead the designation ‘preprint review services’ and distinguishing them from ‘journal-based peer review’. This better reflects the diversity of review approaches these services facilitate, from the spontaneous posting of feedback by individuals to community-driven review platforms. It also acknowledges that feedback on preprints is not bound by the conventions often associated with journal-based peer review: a variety of forms of preprint feedback are evident, ranging from minimal ‘ratings’ and informal comments to in-depth formal peer review (Avissar-Whiting et al., 2023).

In our work, we use as an analytical lens a framework recently proposed by some of us (Waltman et al., 2023) in which a distinction is made between four schools of thought on peer review. The four schools, referred to as the Quality & Reproducibility school, the Democracy & Transparency school, the Equity & Inclusion school, and the Efficiency & Incentives school, provide a framework for organising innovations in the peer review space, grouping them in terms of how they perceive problems of peer review and develop solutions in response to these problems. We discuss these in detail in the analysis below. We suggest that the four schools framework offers a useful way to better understand the complexity of innovation in preprint review. Based on our analysis, we reflect on possible future models of peer review and scholarly communication that may arise from the experimentation we are currently witnessing.

Following a description of our methods in Section 2, Section 3 provides an overview of services facilitating the review of preprints. Section 4 discusses the contribution of preprint review services in addressing the various issues raised by the four schools of thought on peer review. Section 5 explores how preprint review services are managing apparent tensions between different aims, namely the ones resulting from the anonymisation of reviewers and authors. Section 6 discusses how preprint review services fit into the publishing landscape and how they may influence the development of this landscape, and Section 7 presents some concluding remarks.

Methods

We carried out a descriptive study using qualitative analysis. We used ReimagineReview (RR) as a starting point to identify services and initiatives that facilitate the review of preprints. RR, created by ASAPbio (n.d.), is a registry of services and experiments around peer review of scientific outputs. We supplemented the registry data with desk research using online sources such as the websites of preprint review services, blog posts and scientific articles. Using ATLAS.ti Windows (version 23.0.8.0), we carried out a qualitative analysis of text from the relevant sites, assigning codes covering what the services do and why they have chosen to do it that way. Where there was ambiguity in project descriptions, we contacted the service managers to clarify and confirm details.

As of April 2022, the RR directory listed 35 services for preprint review. For our study, we considered only services where the process of reviewing preprints was publicly visible; services listed in RR that do not publish reviews were excluded. We finalised the selection and analysis of services for our study in November 2022. The websites of two preprint review services included in our initial set, CrowdPeer and Red Team Market, became unavailable after November 2022. As CrowdPeer became unavailable before we finished the analysis, we could not check and describe its main characteristics as we did for the other services, including Red Team Market. We therefore decided not to include CrowdPeer in our final set of 23 services for review of preprints. Nevertheless, based on our initial analysis and the information available in the RR directory, we have included some comments on CrowdPeer as an illustration of possible future models of peer review and scholarly communication, where relevant. Red Team Market was taken out of operation later, so we were able to include it in our analysis. On the other hand, some services have evolved, and for these services our analyses, as a snapshot, may not be fully up-to-date. This is the case for Rapid Reviews: COVID-19, which has expanded beyond COVID-19 and is now Rapid Reviews\Infectious Diseases, an open-access overlay journal that accelerates peer review of important infectious disease-related research preprints (RR\C19, n.d.).

To provide an overview of the 23 services included in our final set, we first described their main characteristics according to six categories: 1) project name, 2) developed by, 3) scientific disciplines, 4) goals and intentions, 5) peer review approach, and 6) approach to transparency. Under “developed by”, we used three categories: publisher, university, and individual researcher or group of researchers. When detailed information was not available online, or was available but not sufficient to support our classification, we contacted the service managers and asked them how they would describe who leads the project. We used the “other” category for services that did not respond and for which ambiguity remained.

For the definition of scientific disciplines, we grouped all disciplines identified in our set into five categories, following the usual organisation of the scholarly communication landscape: Physical Sciences, Biomedical and Life Sciences, Engineering and Technology, Social Sciences, and Humanities. We used an additional All Disciplines category for services hosting any scientific discipline or taking a multidisciplinary approach without a clear definition of the disciplines or sub-disciplines covered.

To discuss how preprint review services fit into the publishing landscape and how they point towards possible future models of peer review and scholarly communication, we then analysed their stated goals, review approaches and transparency, using previous studies as a baseline to discuss and validate our findings. We addressed the following questions:

  • How do these services differ from, and what do they have in common with, traditional peer review? How are they linked with the four peer review schools of thought recently proposed by some of us (Waltman et al., 2023), and how do they manage connections between schools when adopting ideas from multiple schools?

  • How can preprint review supplement journal-based peer review or offer an alternative to it?

Our use of the four schools of thought framework enabled us to explore how innovation and experimentation promoted by preprint review services can address the journal-based peer review challenges identified by Waltman et al. (2023) and provide possible solutions and future directions. We assessed the contribution of preprint review services against the key issues identified in the four schools of thought framework, considering the stated goals and intentions of each service, their approach to peer review, and their commitment to openness and transparency. As Waltman et al. (2023) state, these schools have different views on the key problems of the peer review system and the innovations needed to address these problems. While the schools may at times complement each other, the differences in their views may also give rise to tensions. Based on this idea, we aim to understand how preprint review services are able to manage possible tensions when bringing together the key issues of different schools.

Finally, we identified possible future models of peer review and scholarly communication that may arise from the experimentation we are currently witnessing.

Overview of preprint review services

Our final set, presented in Table 1, consists of 23 services for review of preprints – some developed by newly-established groups, others by pre-existing organisations, such as publishers or universities. Some focus only on reviewing preprints (7 services), while others also review journal articles (14), privately shared manuscripts (5) or other types of scholarly outputs (11). The most represented category of developers is individual researcher or group of researchers (15 services), followed by others (5), publishers (2) and universities (1). The most represented scientific disciplines are all disciplines (12 services) and biomedical and life sciences (8), followed by physical sciences (3), social sciences (3), engineering and technology (2), and humanities (1).

Service name | Developed by | Scientific disciplines | Types of outputs
Crowd preprint review | Individual researcher or group of researchers | Biomedical and Life Sciences | Preprints
Episciences | Other | All disciplines | Preprints; Journal articles; Other scholarly outputs
Faculty Opinions | Individual researcher or group of researchers | Biomedical and Life Sciences | Journal articles; Other scholarly outputs; Preprints
Hypothesis | Individual researcher or group of researchers | All disciplines | Journal articles; Other scholarly outputs; Preprints; Privately shared manuscripts
Interactive Public Peer Review | Publisher | All disciplines | Journal articles; Preprints
Peer Community in | Individual researcher or group of researchers | All disciplines | Preprints; Privately shared manuscripts
Peer Library | Individual researcher or group of researchers | All disciplines | Journal articles; Other scholarly outputs; Preprints; Privately shared manuscripts
Peeriodicals | Other | All disciplines | Journal articles; Other scholarly outputs; Preprints
PeerRef | Individual researcher or group of researchers | All disciplines | Preprints
Plaudit | Individual researcher or group of researchers | All disciplines | Journal articles; Other scholarly outputs; Preprints
preLights | Individual researcher or group of researchers | Biomedical and Life Sciences | Preprints
PREreview | Individual researcher or group of researchers | All disciplines | Preprints
PubPeer | Other | All disciplines | Journal articles; Preprints
Qeios | Individual researcher or group of researchers | All disciplines | Journal articles; Other scholarly outputs; Preprints
Rapid Reviews: COVID-19 | Publisher | Biomedical and Life Sciences; Social Sciences | Journal articles; Preprints
Red Team Market | Individual researcher or group of researchers | Social Sciences | Journal articles; Other scholarly outputs; Preprints; Privately shared manuscripts
Researchers.One | Individual researcher or group of researchers | All disciplines | Other scholarly outputs; Preprints; Privately shared manuscripts
Review Commons | Individual researcher or group of researchers | Biomedical and Life Sciences | Preprints
Scibase | Other | Physical Sciences; Biomedical and Life Sciences | Journal articles; Other scholarly outputs; Preprints
Science Open Reviewed | University | Biomedical and Life Sciences; Engineering and Technology; Social Sciences; Humanities | Journal articles; Preprints
SciPost | Other | Physical Sciences | Journal articles; Other scholarly outputs; Preprints
SciRate | Individual researcher or group of researchers | Physical Sciences; Engineering and Technology | Preprints
Sinai Immunology Review Project | Individual researcher or group of researchers | Biomedical and Life Sciences | Preprints

Table 1: Overview of the services for review of preprints

Table 2 lists the different approaches to peer review transparency and openness taken by the preprint review services. Offering alternative publication options or alternative approaches to quality assessment compared with the traditional journal-based model is a common goal for all services. One of the key motivations in encouraging review of preprints is to enable authors to benefit from having their work reviewed and improved before submission to a journal. This is often seen as having the potential benefit of reducing rounds of journal re-review. Some services also give credit to reviewers as a way of incentivising participation. Review of preprints is often explicitly said to address the need for quality control of preprints, reinforcing confidence in the use of preprints and making the evaluation of scientific work more efficient and more open and transparent.

Transparency is an evident priority in services that implement one or more of the different aspects of open peer review identified by Ross-Hellauer (2017), including “open reports” (where reviewer reports and author responses are published), “open identities” (where the identities of authors and reviewers are known to each other), “open interaction” (where direct unmediated exchange between authors and reviewers is enabled) and “open participation” (where reviewers can self-nominate and add comments without being invited to do so), amongst others. In some cases, such as Review Commons, transparency applies only to review reports and not to the reviewer’s identity. In others, such as SciPost and Peer Community In, reviewers have the option to withhold their identity, which is then visible only to editors or editorial staff. Two services, Peer Community In and Hypothesis, support double-anonymous peer review, in which not only reviewers but also authors can withhold their identity.

The services promote new communication patterns regarding quality assessment and reviewer selection (Table 2). Public post-publication reviews, recommendations, comments, free-form commenting and quantitative scores are among the different assessment approaches we identified. Among the seven services that are restricted to reviewing preprints, the most frequent quality assessment approach is the structured review form (where reviews are input using standardised structured headings). This approach is used by 4 services: PREreview, PeerRef, Review Commons and Sinai Immunology Review Project. With respect to reviewer selection, apart from 2 services (Peeriodicals and Peer Community In), the selection of reviewers does not depend only on the editor’s decision. We identified 7 services that allow reviewers to be selected by an editor, the editorial staff, or the community and 2 services where reviewers are selected by an automated tool. There are 12 services where reviewers can be self-nominated. Although the use of an automated tool constitutes an innovative approach to the selection of reviewers, it is not limited to preprint review services. Compared with journal-based peer review, the self-nomination of reviewers is an important innovative aspect of preprint review services, enabling reviewers to get involved in an ostensibly more participative and interactive review process. Self-nomination means that, in principle, anyone can serve as a reviewer on a platform, without intervention from an editor, editorial staff or anyone else. In most cases, reviewers just have to be registered on the platform to make a comment or recommendation, or to endorse a preprint based on a pick-and-choose format (e.g., the Plaudit endorsement tool).

Some initiatives offer services that go beyond the review process itself. preLights and PREreview, for instance, invest in reviewer training for early-career researchers and in other initiatives focused on promoting and recognising reviewers. preLights provides assessments of reviewers (peer review of peer reviews) and recognition for reviewers’ contributions. PREreview organises training workshops centred on issues of equity, diversity, and inclusion. Another focus of some services is reproducibility, a key issue relating to the quality of scientific outputs. The aim of overcoming or mitigating the lack of quality control associated with preprints and of addressing issues of reproducibility is visible in the policies and procedures of some of the services. This is the case for Qeios, Scibase and Peer Community In, which request authors to make code and data openly available and then ask reviewers to consider the code and data in their review.

Service | Reviewer selected by | Peer review format | Reviewer identity | Reviewer identity known to | Competing interests | Transparency
Crowd preprint review | Self-nominated | Annotations | Non-anonymous | Editor or editorial staff | Not included | Open identities; Open interaction; Open reports; Single anonymous
Episciences | Editor, editorial staff | Free-form commenting; Quantitative scores; Structured review form | Anonymous; Non-anonymous | Editor or editorial staff or public | Checked | Open interaction; Single anonymous; Open identities; Open reports
Faculty Opinions | Editor, editorial staff or community | Annotations; Free-form commenting; Quantitative scores; Structured review form | Non-anonymous | Public | Checked | Open identities; Open reports
Hypothesis | Self-nominated | Social annotation; Free-form commenting; Structured review form | Anonymous; Non-anonymous | Editor or editorial staff | Not included | Double anonymous; Open identities; Open interaction; Open participation; Open reports; Single anonymous
Interactive Public Peer Review | Editor, editorial staff or community | Structured review form; Free-form commenting | Anonymous; Non-anonymous | Editor or editorial staff | Checked | Open interaction; Open participation; Open reports
Peer Community in | Editor | Free-form commenting | Anonymous; Non-anonymous | Editor or editorial staff | Checked | Double anonymous; Open identities; Open reports; Single anonymous
Peer Library | Self-nominated | Annotations | Non-anonymous | Public | Not included | Open identities; Open interaction; Open participation; Open reports
Peeriodicals | Editor | Free-form commenting; Structured review form | Anonymous; Non-anonymous | n.a.* | n.a.* | Open interaction
PeerRef | Automated tool | Structured review form | Non-anonymous | Public | Displayed | Open identities; Open interaction; Open reports
Plaudit | Self-nominated | Annotations; Quantitative scores; Structured review form | Non-anonymous | Public | Not included | Open identities; Open interaction; Open participation; Open reports
preLights | Self-nominated | Summary; Reasons for selection; Significance and relevance; Free-form commenting | Non-anonymous | Public | Not included | Open identities
PREreview | Self-nominated | Free-form commenting; Structured review form; Score | Anonymous | Editor or editorial staff | Displayed | Open identities; Open interaction; Open reports; Single anonymous
PubPeer | Self-nominated | Free-form commenting | Anonymous; Non-anonymous | None; Public | Not included | Open identities; Open interaction; Open participation; Open reports; Single anonymous
Qeios | Automated tool or community | Structured review form; Quantitative scores; Free-form commenting | Non-anonymous | Public | Displayed | Open identities; Open interaction; Open participation; Open reports
Rapid Reviews: COVID-19 | Editor, editorial staff or community | Structured review form; Strength of evidence scale | Anonymous; Non-anonymous | Editor or editorial staff | Checked | Open identities; Open reports
Red Team Market | Editor, editorial staff or community | Annotations; Free-form commenting; Structured review form | Anonymous; Non-anonymous | Editor or editorial staff | Not included | Open identities; Open interaction; Open reports; Single anonymous
Researchers.One | Self-nominated | Free-form commenting | Anonymous; Non-anonymous | Editor or editorial staff | Not included | Open interaction; Open participation; Open reports
Review Commons | Editor, editorial staff or community | Structured review form | Anonymous; Non-anonymous | Editor or editorial staff | Checked | Open reports
Scibase | Self-nominated | Free-form commenting; Structured review form; Scale or rating | Anonymous; Non-anonymous | Public | Not included | Open participation
Science Open Reviewed | Authors, editor, editorial staff or community | Review report; Free-form commenting | Anonymous; Non-anonymous | Public | Checked | Open identities
SciPost | Editor, editorial staff, community or self-nominated | Free-form commenting; Structured review form | Anonymous | Editor or editorial staff | Checked | Open participation; Open reports
SciRate | Self-nominated | Free-form commenting; Quantitative scores | Anonymous; Non-anonymous | Public | Checked | Open identities; Open interaction; Open participation
Sinai Immunology Review Project | Self-nominated | Structured review form; Summary and grade | Non-anonymous | Editor or editorial staff; Public | Not included | Open identities; Open reports

*Information not available or not identified 

Table 2: Different approaches to peer review transparency and openness

Preprint review services through the lens of four peer review schools of thought

We use the four peer review schools of thought recently proposed by some of us (Waltman et al., 2023) as a framework to discuss the developments around preprint review services. Waltman et al. (2023) proposed four schools of thought on innovation in peer review, each of which has a different emphasis on what the key problems are within current peer review systems and what the priorities should be for improving these systems. Below, we present the focus of each school of thought, as previously described by Waltman et al. (2023):

  • The Democracy & Transparency school focuses on making the evaluation of scientific research more democratic and transparent. Concerned that peer review systems are often elitist and opaque, this school advocates broader participation in the review process to increase the accountability of editors and peer reviewers and enable information produced in peer review to be reused by others.

  • The Quality & Reproducibility school focuses on the role of peer review in evaluating and improving the quality and reproducibility of scientific research. Based on concerns about inconsistent quality associated with peer review processes, this school is interested in innovations in peer review that improve the quality of review reports and published research. Another focal issue for this school is safeguarding research integrity and identifying scientific misconduct.

  • The Equity & Inclusion school focuses on making peer review processes more equitable and inclusive. This school is given impetus particularly by concerns about bias in peer review systems. It emphasises the need for a balanced representation of different groups of researchers in the peer review system to reduce or eliminate biases related to gender, geography, race, ethnicity, etc.

  • The Efficiency & Incentives school focuses on improving the efficiency of peer review processes and the incentives for peer reviewers. This school is concerned about the pressure on the peer review system, which makes it increasingly difficult to find peer reviewers, emphasising the need to increase the efficiency of peer review and to better incentivise reviewers.

We assessed the characteristics of the 23 preprint review services against the key issues raised by the four schools of thought, considering the stated goals and intentions of each service, their approach to peer review, and their commitment to openness and transparency. Based on those criteria, we identified how each service could be associated with each of the four schools of thought. This gives us insight into key issues, such as where the priorities of the different services lie, how they are managing apparent tensions between different aims, and to what extent they are delivering on their objectives. These designations are discussed in more detail in the following sub-sections.

Making peer review more democratic and transparent

Calls for more transparent and open peer review approaches are a common feature of discussions about peer review. Recently, Allen et al. (2022) presented the lack of transparency as one of the three main challenges of peer review, together with its exploitative nature and its slowness. Based on a wide-ranging conversation on X (the social-media platform previously known as Twitter), they identified the need to open the “black box” of the current peer review system as the most constructive way to “fix peer review”. According to Fernandez-Llimos (2023), to mitigate the peer review crisis, we should start recognising peer reviewers by acknowledging their contribution in every paper they have reviewed, which can be done most effectively through open peer review.

By committing to openness and transparency, preprint review services are already arguably trying to “fix peer review”, challenging authors, reviewers, editors, and publishers to rethink and adjust their practices. Openness and transparency are common goals for all services included in our study, meaning that they all support the ideas of the Democracy & Transparency school. All services in our study provide open review reports and promote open interactions among authors, reviewers and users. Still, Table 2 shows considerable variation in peer review approach, selection of reviewers, management of their identities, transparency and openness. It is evident that transparency and openness have different meanings and are applied differently by different services. Managing reviewers’ identities is perhaps one of the biggest challenges the services face. As highlighted by Chloros et al. (2023), open peer review involving open identities enables better recognition of the contribution reviewers make. However, for many researchers, revealing their identity may raise concerns, such as the fear of retaliation, which may lead them to provide less critical feedback. In Section 5, we discuss how preprint review services deal with this tension and how they balance transparency and openness with the interests and concerns of reviewers.

Improving quality and reproducibility

The Quality & Reproducibility school emphasises the crucial role of peer review in improving and certifying the quality of scientific work. A focus on quality is evident in preprint review services that train reviewers, evaluate reviewers’ work, check for competing interests, and pay attention to ethics and integrity. As can be seen in Table 2, in addition to unstructured reports, preprint review services also use structured review forms, quantitative scores, and scales or ratings, showing the broad range of approaches these services take to assess and improve the quality of scientific work. Many preprint review services have adopted ideas of the Quality & Reproducibility school, thereby showing their commitment to rigorous preprint review and positioning themselves as promoters of high-quality and reproducible research. Faculty Opinions, for instance, states that the “caliber” of its reviewers and the “rigour” of the validation they provide ensure the quality of the research recommended by them. At Peer Community In, reviewers evaluate preprints in their thematic fields based on “rigorous peer review” and “conflicts of interests are carefully checked at each step of the process”. Plaudit considers that “publisher-independent endorsements” provided by “known and trusted academic community members provide credibility for valuable research”.

High standards in peer review are often highlighted as essential for improving research quality and reproducibility (e.g., Chambers & Tzavella, 2022). As the attention given to the lack of reproducibility in research grows (e.g., Brendel, 2021; Errington et al., 2021), improving the reproducibility of research is increasingly seen as one of the responsibilities of peer review. This is reflected in the goals and intentions of some preprint review services and in their policies and procedures. Examples of services for which this is the case are Scibase, Qeios and Peer Community In.

SciBase states that “science has a reproducibility problem” and argues that “only through honest public discussion can post-publication review become part of the scientific process, meaning that the reviewer’s identity is known to the public.” Reproducibility is one of the dimensions evaluated in SciBase’s review approach, which combines a report and an approval status (scale or rating). The overall rating provided by a review is a weighted average of the ratings of each of the individual dimensions, with the reproducibility dimension having the highest weight (25%). The other dimensions are logic/design, impact, transparency, clarity and versatility (15% each). SciBase also enables users to rate reviews themselves on a 1-5-star scale, with the intention of promoting transparency, rigour, accountability and reviewer recognition.
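To make the weighting concrete, a minimal sketch of the calculation (our own rendering of the stated weights, not a formula published by SciBase) is:

\[
R = 0.25\, r_{\text{reproducibility}} + 0.15\,\bigl(r_{\text{logic/design}} + r_{\text{impact}} + r_{\text{transparency}} + r_{\text{clarity}} + r_{\text{versatility}}\bigr)
\]

where each \(r\) is the rating given for the corresponding dimension; since the weights sum to \(0.25 + 5 \times 0.15 = 1\), the overall rating \(R\) is a weighted average of the six dimension ratings.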

With the aim of improving quality and reproducibility, some services, like Qeios and Peer Community In, require authors to make all study data, digital materials, and computer code publicly available at the time of submission to the maximum extent permissible by the relevant legal or ethical restrictions. Scripts, simulation code, software parameters, etc., are prerequisites for submission. Also, reviewers are asked to check that authors provide sufficient details for the methods and analysis to allow replication by other researchers, considering statistical analyses and consistency of raw data and scripts, among other critical issues. As Peer Community In states, with these prerequisites, it aims to “establish a high-quality, free, public system for identifying high-quality preprints, after rigorous peer-review.” In addition to these examples, other services like Hypothesis and Peer Library make guidelines, recommendations or other support documents available to reviewers.

Making peer review processes more equitable and inclusive

Journal-based peer review is often perceived to be susceptible to disparities, inequities, and bias (e.g., Bancroft et al., 2022). The need to make peer review processes more equitable and inclusive is the focus of the Equity & Inclusion school. Supporters of this school often emphasise the need for a balanced representation of different groups of researchers in the peer review system to avoid biases related to gender, geography, race, ethnicity, etc.

An increasing amount of attention is paid to underrepresented groups in publication and peer review processes, aiming to contribute to diversity, equity, and inclusion (Royal Society of Chemistry, n.d.). Increasing diversity amongst editors, reviewers and authors is starting to be reflected in some editorial policies and practices. A key aim is to address biases in peer review against underrepresented groups. Concerns around peer review bias highlight the complexity of social interactions among actors involved in peer review and raise questions about the nature of the various forms of bias discussed in the literature (Lee et al., 2013).

An important example of a preprint review service explicitly addressing inequities and biases, with special attention to reviewers, is PREreview. By providing opportunities for traditionally marginalised research communities to get involved, train, connect, and be recognised for their contributions to scholarship, PREreview aims “to bring more equity and transparency to scholarly peer review.” At the same time, with the stated aim of helping researchers think deeply about how assumptions or biases may affect their assessment of manuscripts, PREreview developed the Bias Reflection Guide (Foster et al., 2021). This guide, which reflects the perspective of the Equity & Inclusion school, is provided to help students and researchers make their approach to peer review more inclusive and less biased.

PREreview enables reviewers to publish reviews with their public names (connected to ORCIDs) or pseudonyms provided by the platform. This is intended to allow reviewers to critique the work they review without fearing negative consequences, particularly if the author is more senior than the reviewer. In this way, ‘PREreviewers’, particularly those belonging to vulnerable communities, may “contribute to open preprint review without fearing retribution or judgment that may occur when their full name is attached to the feedback, whilst retaining an element of accountability”. From the point of view of reviewers, protecting their identities can be expected to reduce the likelihood of bias based on factors such as gender, ethnicity, or institutional affiliation. Arguably, it protects reviewers against retaliation and encourages rigorous and honest feedback. Preprint review services take several different approaches to handling issues related to the identity of reviewers. Even for services that invite reviewers to embrace “full transparency (i.e., critics’ names and their criticisms are made publicly available)”, as Red Team Market does, revealing one’s identity is not always mandatory (Table 2). Reviewers’ identities may be protected if they wish. Some services enable reviewers to choose whether they want to sign their review reports, while others protect the identity of reviewers with a nickname or a pseudonym. Another approach is taken by Crowd preprint review, which aggregates the comments of different reviewers in a final synthesised review that is “publicly posted, without linking specific comments to the commenter.” Rapid Reviews: COVID-19 (RR:C19) emphasises global diversity. It works with a “global team, including board members and reviewers from all regions”, and reviewers can be anonymised upon request.

Another approach promoted by the Equity & Inclusion school is double-anonymous peer review, in which not only reviewers but also authors are anonymous. This approach is hard to implement in the context of preprint review. Nevertheless, some services make an effort to facilitate double-anonymous peer review in special situations. We will discuss this in more detail in Section 5.2.

Improving peer review efficiency and reviewer incentives

Peer review is often seen as overloaded (e.g., Nguyen et al., 2015; Okuzaki et al., 2019), and there is a lot of discussion about faster and more efficient approaches to peer review. Reducing inefficiencies in peer review constitutes a big challenge for journals, where the expectation typically is that a separate review process is carried out for each submission to each journal, including submissions previously rejected by other journals. By providing journal-independent portable review, preprint review services may reduce re-reviewing at multiple journals and increase the efficiency of peer review. On the other hand, depending on the extent to which journals are willing to adjust their processes and consider re-using preprint reviews, we might ask whether preprint review initiatives are, in fact, adding to an already overburdened system by creating even more work. In Section 6, we address these questions in more detail.

With regard to portable peer review, ten years ago Swoger (2013) argued that rejection takes time and energy from authors, reviewers and editors, and that portable peer review reduces re-reviewing at multiple journals, increasing the efficiency of the process. Swoger pointed out the role that new “portable peer review” services, independent of specific journals, could play in cutting down on redundant work by separating the review process from the publication process.

This illustrates one of the key issues emphasised by the Efficiency & Incentives school, which focuses on improving the efficiency of peer review processes and the incentives for peer reviewers. This school is concerned about the pressure on the peer review system, in which it is increasingly difficult to find peer reviewers. Approaches that make review processes more efficient and reduce re-reviewing at multiple journals are facilitated by various preprint review services, such as Rapid Reviews: COVID-19, PeerRef and Review Commons, together with initiatives to incentivise researchers by making their work more visible and recognised. Nevertheless, according to the experience reported by the Rapid Reviews: COVID-19 service, so far there is no hard evidence that journals want to take their reviews into account. Despite efforts to balance rapidity and rigour and to make peer review more efficient, not all journals are receptive to establishing formal relationships with preprint review services in order to access a feed of reviews that could supplement their review processes.

Preprint peer review services may also address problems arising from the “reputation economy” for reviewers (Swoger, 2013). CrowdPeer, for instance, allowed reviewers “to build their reputations within the community and benefit professionally.” By implementing a reviewer evaluation system, in which the quality of a reviewer’s work is evaluated, CrowdPeer brought together key issues from the Democracy & Transparency school (accountability) and the Efficiency & Incentives school (recognition). Another example is preLights. By providing a “platform where early-career researchers can practise their scientific writing and reviewing skills,” this service aims to raise the profiles of early-career researchers as “trusted preprint selectors and commentators.” preLights provides recommendation letters to support early-career researchers.

The attribution of a DOI (Digital Object Identifier) to public review reports makes the reviewer’s work fully citable and claimable as a scientific output, offering another way for reviewers to gain recognition for their efforts. This is implemented by several services, such as Rapid Reviews: COVID-19, Interactive Public Peer Review, and preLights. In addition, Rapid Reviews: COVID-19 and Plaudit link researchers, identified by their ORCID, to the research they have evaluated, identified by its DOI.

For Science Open Reviewed, a “paid service [for reviewers] combined with published reviewer acknowledgement” may promote higher quality reviews, provide “fair and motivating compensation” for professional peer review services and “minimise reviewer bias and promote greater reviewer accountability.”

Managing tensions between schools of thought: Anonymisation of reviewers and authors

In some cases, key priorities of different peer review schools of thought can be achieved alongside each other by the same preprint review service. For example, the Efficiency & Incentives school and the Democracy & Transparency school seem to complement each other in various ways. It may therefore be relatively easy for a preprint review service to embrace the ideas of both of these schools. This is, for instance, illustrated by CrowdPeer, although interestingly, this service is no longer operational. By providing “open review of preprints and engagement of a diverse group of reviewers,” it aligned with the Democracy & Transparency school, making the evaluation of research more democratic and transparent. At the same time, CrowdPeer also aimed to improve the efficiency of review processes by providing “a universal structure for reviews to standardise the review process” and providing an “educational tool for reviewers.” In that way, CrowdPeer aimed to reduce the pressure on the review system, in line with the Efficiency & Incentives school.

In other cases, however, there are tensions between different peer review schools of thought, reflecting different perspectives on how peer review can best be organised. As we discuss in the next sub-sections, managing the identities of reviewers and authors is one of the biggest challenges preprint review services face.

Anonymisation of reviewers

The Equity & Inclusion school promotes anonymity of reviewers, as we have seen. It sees anonymity as a way to protect reviewers against possible negative consequences of critical reviews. Researchers at an early career stage, or those who otherwise find themselves in a vulnerable position, may be especially likely to fear such consequences. Anonymity makes it easier for these researchers to perform peer review and therefore arguably contributes to more inclusive approaches to peer review.

The Quality & Reproducibility school may also be supportive of anonymity of reviewers, based on the rationale that anonymity enables reviewers to be more frank and therefore results in higher-quality peer review. However, anonymity of reviewers is in tension with the ideas of the Democracy & Transparency school. This school prefers to organise peer review as an open dialogue in which authors and reviewers participate on an equal basis, with each party accountable for their contributions. Organising peer review in this way is difficult or even impossible when reviewers are anonymous.

As shown in Table 2, with respect to transparency and openness, preprint review services provide different options. To balance transparency and protect reviewers’ interests, some services (13) enable reviewers to decide whether to sign their reviews, becoming anonymous or non-anonymous to the public. Others (8) require a non-anonymous review. In addition to the option of signing or not signing reviews, another solution already mentioned is to protect reviewers under a nickname or a pseudonym. This approach is a way in which preprint review services manage tensions between peer review schools of thought, in particular between the Democracy & Transparency and the Equity & Inclusion schools. PREreview is explicit that it aims to bring “more equity and transparency to scholarly peer review by supporting and empowering communities of researchers, particularly those at early stages of their career”, but in doing so, it arguably also limits transparency. For Hypothesis, enabling reviewers to use a pseudonym promotes “transparency and credibility without public identity,” although it is evident that they are managing a tension between these two aims. In relation to reviewers’ identity, these services attempt to balance the idea of transparency and the accountability it brings with protections for reviewers where they might experience negative consequences if their identities were disclosed.

Anonymity of reviewers also poses a challenge for the Efficiency & Incentives school. To incentivise reviewers, this school emphasises the importance of giving recognition to reviewers. However, when reviewers are anonymous, it is difficult to publicly recognise them for the efforts they make, other than to say a particular reviewer has reviewed for a particular service without any more detail than that. In particular, associating reviewers with the text of the review, and recognising their contribution at that level of detail, becomes difficult. In addition, anonymity of reviewers may also complicate the reuse of reviews, making it difficult for editors to recognise the competency and credibility of reviewers, thereby undermining the desire of the Efficiency & Incentives school to make peer review more efficient.

This highlights how complex and challenging it can be for a preprint review service to manage tensions between different priorities, like transparency, rigour, recognition, and equity and inclusion, with tensions related to anonymity in peer review being among the most challenging ones.

Anonymisation of authors

For the Democracy & Transparency and the Efficiency & Incentives schools, performing peer review after publication, not before, offers an important way to enhance the transparency and efficiency of peer review. Hence, these schools take a critical stance towards double-anonymous approaches to peer review, in which not only reviewers but also authors are anonymous. Anonymity of authors is incompatible with the idea of performing peer review after publication. On the other hand, the Equity & Inclusion school supports double anonymity, arguing that anonymity of authors reduces bias in peer review and therefore contributes to making peer review more equitable and inclusive. As already pointed out, the Quality & Reproducibility school may perceive reducing bias as a way to improve the quality of peer review and may therefore also support double anonymity.

In our set of preprint review services, there are two, Peer Community In and Hypothesis, that state they offer double-anonymous peer review (Table 2). For Peer Community In, double-anonymous peer review is not a regular practice but may happen exceptionally by request to fulfil the author’s needs. To submit an article anonymously to Peer Community In, the authors must use a private web-based interface (e.g., Google Docs, Dropbox or GitHub). Data, scripts and codes must also be made available through a private web-based interface. In this way, the names of the authors do not need to be disclosed to the reviewers evaluating the article, making it possible to have a double-anonymous evaluation. If the article is eventually recommended by the recommender overseeing the peer review process, the authors must deposit a final, non-anonymised version of their article in an open archive. Although Hypothesis also claims to facilitate double-anonymous peer review, it is not clear to us how this is actually done.

Double anonymity is essentially incompatible with preprint review. Still, although it is less transparent, it may be argued to be more inclusive because it attempts to avoid biases related to gender, geography, race, ethnicity, etc. This is clearly a compromise: an attempt to manage the tension between the drive for transparency, characteristic of preprinting, and that for inclusiveness. By offering double-anonymous evaluation of an article, Peer Community In and Hypothesis offer a peer review process that is very similar to traditional closed journal-based peer review, making us question whether one should refer to this as preprint review.

How do preprint review services fit into the publishing landscape?

Preprint review services typically position themselves as complementary to journal-based peer review. The idea of being a “complementary service with no intention of competitiveness” (Peer Community In) is present in the stated aims of almost all preprint review services, which, for instance, argue that they enable researchers to “innovate and explore new approaches to scientific dissemination, in parallel to the traditional publishing industry” (Peeriodicals). PeerRef’s stated aim to “provide researchers with greater choice in how their research is shared and evaluated, and eliminate the need for repeated peer review in successive journals” is a key way in which this complementarity works. Some services implement their complementary role in the system by working directly with affiliate, friendly or partner journals, including the emerging overlay journal model (Rousi & Laakso, 2022), such as the journals hosted by Episciences, where open peer review reports must be available in an open repository, data repository or Software Heritage. Other services support journals in more indirect ways, for instance by helping editors find “new research they may not have otherwise discovered” (Rapid Reviews: COVID-19) and reducing re-reviewing. In this way, preprint review services arguably help to organise peer review in more efficient ways.

However, the success of preprint review initiatives as a complementary service to journal-based peer review depends on the extent to which journals are going to adjust their processes and are going to make use of preprint reviews. According to Saderi and Greaves (2021), editors may be open to these new developments only under certain conditions. To better understand if and how preprint reviews may help make journal-based peer review processes faster and more efficient, Saderi and Greaves asked editors of the journals involved in the COVID-19 Rapid Review (OASPA, 2020) initiative to fill out a short survey. Based on a small number of responses, they concluded that any progress towards an integrated workflow between preprint reviews and journal-based peer review would take a coordinated effort by different stakeholders and that this effort must focus on building mutual trust: on the one hand, the trust by editors in preprint reviews and, on the other hand, the trust by authors and reviewers that their contributions will be valued and recognised.

If journals are not interested in making use of preprint reviews, preprint review services arguably leave themselves open to the criticism that they put even more pressure on an already overburdened system by adding another layer of review on top of journal-based peer review. Rather than developing ways to improve publishing and peer review workflows, the services might be seen as unnecessarily adding further complexity to the system. The viability of preprint review services may depend on how they address this criticism – for example by arguing that they offer alternative publication options to authors and help to turn peer review into a more rewarding experience for authors and reviewers.

Nevertheless, while almost all preprint review services we examined ostensibly aim to supplement the current journal-based publishing system, some also identify the possibility of more radical change, aiming for more ambitious reforms of scientific publishing. They consider, for example, that “the quality of published work must stand on its own, without the crutch of impact factors, journal prestige, ‘likes’, ‘thumbs up’, or the artificial stamp of approval signalled by the label ‘peer review’” (Researchers.One), or aim “to relieve the pressure on authors to publish their work in ‘top-tier’, but often paywalled journals” (Plaudit). Despite some resistance from editors to partner with preprint review services, some journals are already exploring review of preprints by themselves or partnering with preprint review services. This raises a key question: Should preprint review services just be seen as a complement to journal-based publishing, or could they potentially offer an alternative to journal-based publishing?

At present, preprint review services seem to opt for a cautious approach, avoiding a radically disruptive position that could compromise their future development. However, after this first stage of experimentation, the increasing maturity of preprint review services and the involvement of and recognition from the research community may possibly enable these services to position themselves as an alternative, rather than a complement, to journal-based publishing. As discussed by Kramer et al. (2022), the dissemination function of journals may be taken over by preprint servers (although the term ‘preprint’ may no longer be appropriate), while preprint review services may take over the evaluation function of journals. In the most extreme outcome, conventional journals may cease to exist, and scientific publishing may take place entirely on preprint servers and preprint review platforms.

Edmunds (2022), editor-in-chief at GigaScience Press, argues that, as standalone movements, the several models and experiments developed to improve the journal system “were a harder sell, but now, working together, they interact and support each other synergistically”. Like Edmunds, we believe that preprint review services offer important opportunities to improve the publishing process by improving transparency, accountability, efficiency and speed. Moreover, Edmunds highlights that “research culture has also been changing, with younger researchers seeing transparency and openness as a norm, and funders and publishers endorsing and promoting such efforts”. Edmunds is probably right that the “time seems to have finally arrived” for preprinting combined with open and portable peer review.

Conclusions

To provide a systematic understanding of the main characteristics of preprint review services, we carried out a descriptive study of 23 services. We described how the services have been set up to manage preprint review and how they fit into the publishing landscape. Based on this, we identified possible future models of peer review and scholarly communication.

We used the four peer review schools of thought framework (Waltman et al., 2023) to explore how innovation and experimentation promoted by preprint review services can address the challenges of traditional journal-based peer review and provide possible solutions and future directions. Our analysis gave us insight into key issues, such as where the priorities of the different services lie and how they manage tensions between schools, reflecting different perspectives on how peer review can best be organised.

The most important tension that we identified relates to anonymisation of reviewers and authors. In line with the ideas of the Democracy & Transparency school, preprint review services promote more open forms of peer review in which authors and reviewers participate on a more equal basis. However, from the perspective of the Equity & Inclusion school, this raises concerns. To make peer review processes more equitable and inclusive, this school emphasises the importance of enabling anonymisation of reviewers and possibly also authors, which is in tension with the focus on openness and transparency of preprint review services.

Preprint review services have the potential to address a number of key issues in scholarly communication and peer review. By providing quality control, preprint review services can enrich the value of preprints, reinforce confidence in their use, and make the evaluation of research more efficient and more open and transparent. In addition, preprint review services may improve accountability and recognition of reviewers, supplement journal-based peer review, and reduce the need for re-submission and repeated review at multiple journals.

The scholarly communication landscape seems to be moving toward a mixed system in which preprint servers, preprint review services and journals operate mostly in complementary ways. Preprint review services currently seem to avoid a radically disruptive position. Still, after a period of experimentation, their increasing maturity may enable these services to position themselves as an alternative, rather than a complement, to traditional journal-based publishing. In the longer term, preprint review services may, therefore, disrupt the scholarly communication landscape in a more radical way.

Despite the potential to either complement existing scholarly communication services or even replace them, it is apparent that at present many preprint review services face problems of sustainability. During the period of our analysis, two services, CrowdPeer and Red Team Market, were taken out of service. Whilst the reasons for their closure are not entirely clear, it is obvious that many of the services we studied are built with small budgets and have uncertain funding streams. They typically rely on the enthusiasm and commitment of volunteers. Achieving financial stability will be a challenge for many of these services, particularly if they wish to maintain the independence from large commercial interests that many of them currently have.

Acknowledgements

We are grateful to the managers of preprint review services that kindly contributed to our work by clarifying and confirming some details and adding relevant additional information. We thank Jessica Polka from Crowd preprint review, Raphaël Tournoy from Episciences, Natascha Töpfer from Interactive Public Peer Review, Denis Bourguet from Peer Community In, Mitar from Peer Library, Elliott Lumb from PeerRef, Reinier Prosée from preLights, Daniela Saderi from PREreview, Gabriele Marinello from Qeios, Nick Lindsay from Rapid Reviews: COVID-19, Ryan Martin from Researchers.One and Aram Harrow from SciRate. We also extend our gratitude to Stephen Gabrielson, Dibyendu Roy Chowdhury, Ashley Farley, and Gary McDowell of the ASAPbio Meta-Research Crowd for their insightful comments on our work (https://prereview.org/reviews/10210714).

Author contributions

Conceptualization: SOH, NR, SP, LW; Data curation: SOH; Formal analysis: SOH; Funding acquisition: SP, LW; Investigation: SOH; Methodology: SOH, SP, LW; Project administration: SOH, SP, LW; Supervision: SP, LW; Validation: SOH, NR, SP, LW; Visualization: SOH; Writing – original draft: SOH; Writing – review & editing: SOH, NR, SP, LW

Competing interests

Stephen Pinfield and Ludo Waltman are involved in MetaROR (MetaResearch Open Review), a platform for open peer review of preprints in the field of metaresearch.

Funding information

Stephen Pinfield and Ludo Waltman were supported by Wellcome [221297/Z/20/Z] as part of its core funding of the Research on Research Institute (RoRI).

Data availability

The data that support the findings of this study are openly available in figshare: https://doi.org/10.6084/m9.figshare.24307312.v1

References

ASAPbio. (n.d.). ReimagineReview – A registry of platforms and experiments innovating around peer review. ASAPbio. https://reimaginereview.asapbio.org/

Avissar-Whiting, M., Belliard, F., Brand, A., Brown, K., Clément-Stoneham, G., Dawson, S., Dey, G., Ecer, D., Edmunds, S. C., Fischer, T. D., Farley, A., Franko, M., Fraser, J., Funk, K., Ganier, C., Harrison, M., Hatch, A., Hazlett, H., Hindle, S., … Williams, M. (2023). Advancing the culture of peer review with preprints. Open Science Framework. https://doi.org/10.31219/osf.io/cht8p

Bancroft, S. F., Ryoo, K., & Miles, M. (2022). Promoting equity in the peer review process of journal publication. Science Education, 106(5), 1232–1248. https://doi.org/10.1002/sce.21733

Blatch-Jones, A., Saucedo, A. R., & Giddins, B. (2023). The use and acceptability of preprints in health and social care settings: A scoping review. SocArXiv. https://doi.org/10.31235/osf.io/nug4p

Brendel, K. (2021, July 5). Time to assume that health research is fraudulent until proven otherwise? BMJ. https://blogs.bmj.com/bmj/2021/07/05/time-to-assume-that-health-research-is-fraudulent-until-proved-otherwise/

Chambers, C. D., & Tzavella, L. (2022). The past, present and future of Registered Reports. Nature Human Behaviour, 6(1), 29–42. https://doi.org/10.1038/s41562-021-01193-7

Chloros, G. D., Konstantinidis, C. I., Vasilopoulou, A., & Giannoudis, P. V. (2023). Peer review practices in academic medicine: How the example of orthopaedic surgery may help shift the paradigm? International Orthopaedics, 47(5), 1137–1145. https://doi.org/10.1007/s00264-023-05729-6

Edmunds, S. (2022). Preprints and open peer review come of age. Research Information. https://www.researchinformation.info/analysis-opinion/preprints-and-open-peer-review-come-age

Errington, T. M., Mathur, M., Soderberg, C. K., Denis, A., Perfito, N., Iorns, E., & Nosek, B. A. (2021). Investigating the replicability of preclinical cancer biology. eLife, 10, e71601. https://doi.org/10.7554/eLife.71601

Fernandez-Llimos, F. (2023). Comment on the article: “Peer review practices in academic medicine: how the example of orthopaedic surgery may help shift the paradigm?” International Orthopaedics, 47(5), 1391–1392. https://doi.org/10.1007/s00264-023-05768-z

Foster, A., Hindle, S., Murphy, K. M., & Saderi, D. (2021). Open Reviewers Bias Reflection Guide. Zenodo. https://doi.org/10.5281/zenodo.5484052

Fraser, N., Brierley, L., Dey, G., Polka, J. K., Pálfy, M., Nanni, F., & Coates, J. A. (2021). The evolving role of preprints in the dissemination of COVID-19 research and their impact on the science communication landscape. PLOS Biology, 19(4), e3000959. https://doi.org/10.1371/journal.pbio.3000959

Henriques, S. O., Rzayeva, N., Pinfield, S., & Waltman, L. (2023). Data set – Preprint review services: Disrupting the scholarly communication landscape? [Data set]. figshare. https://doi.org/10.6084/m9.figshare.24307312.v1

Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. Journal of the American Society for Information Science and Technology, 64(1), 2–17. https://doi.org/10.1002/asi.22784

Lutz, J. F., Sondervan, J., Edig, X. van, Freeman, A., Kramer, B., & Rosenkrantz, C. H. (2023). Knowledge Exchange Analysis Report on Alternative Publishing Platforms. Alternative Publishing Platforms. https://doi.org/10.21428/996e2e37.3ebdc864

Nguyen, V. M., Haddaway, N. R., Gutowsky, L. F. G., Wilson, A. D. M., Gallagher, A. J., Donaldson, M. R., Hammerschlag, N., & Cooke, S. J. (2015). How long is too long in contemporary peer review? Perspectives from authors publishing in conservation biology journals. PLOS ONE, 10(8), e0132557. https://doi.org/10.1371/journal.pone.0132557

Ni, R., & Waltman, L. (2023). To preprint or not to preprint: A global researcher survey. SocArXiv. https://doi.org/10.31235/osf.io/k7reb

OASPA. (2020). COVID-19 Publishers Open Letter of Intent – Rapid Review. OASPA. https://oaspa.org/covid-19-publishers-open-letter-of-intent-rapid-review/

Okuzaki, Y., Nakamura, S., & Nakaoka, M. (2019). Toward more rapid and efficient peer review: A case study on publication in Ecological Research. Ecological Research, 34(5), 563–574. https://doi.org/10.1111/1440-1703.12037

Reardon, J., Crawford, J., Allen, K.-A., & Walsh, L. (2022, July 25). The peer review system is broken. We asked academics how to fix it. The Conversation. http://theconversation.com/the-peer-review-system-is-broken-we-asked-academics-how-to-fix-it-187034

Ross-Hellauer, T. (2017). What is open peer review? A systematic review. F1000Research, 6, 588. https://doi.org/10.12688/f1000research.11369.2

Rousi, A. M., & Laakso, M. (2022). Overlay journals: A study of the current landscape. Journal of Librarianship and Information Science, 09610006221125208. https://doi.org/10.1177/09610006221125208

Royal Society of Chemistry. (n.d.). Joint commitment for action on inclusion and diversity in publishing. Royal Society of Chemistry. https://www.rsc.org/policy-evidence-campaigns/inclusion-diversity/joint-commitment-for-action-inclusion-and-diversity-in-publishing/

RR\C19. (n.d.). Rapid Reviews Infectious Diseases. Retrieved 19 July 2024, from https://rrid.mitpress.mit.edu/rrc19

Saderi, D., & Greaves, S. (2021, July 7). Using preprint reviews to drive journal peer review. ASAPbio. https://asapbio.org/using-preprint-reviews-to-drive-journal-peer-review

Swoger, B. (2013). Can you take it with you when you go? Portable peer review. Scientific American Blog Network. https://blogs.scientificamerican.com/information-culture/can-you-take-it-with-you-when-you-go-portable-peer-review/

Waltman, L., Kaltenbrunner, W., Pinfield, S., & Woods, H. B. (2023). How to improve scientific peer review: Four schools of thought. Learned Publishing, 36(3), 334–347. https://doi.org/10.1002/leap.1544

Editors

Kathryn Zeiler
Editor-in-Chief

Kathryn Zeiler
Handling Editor

Editorial Assessment

by Kathryn Zeiler

DOI: 10.70744/MetaROR.14.1.ea

The authors present a descriptive analysis of preprint review services. The analysis focuses on the services’ relative characteristics and differences in preprint review management. The authors conclude that such services have the potential to improve the traditional peer review process. Two metaresearchers reviewed the article. They note that the background section and literature review are current and appropriate, the methods used to search for preprint servers are generally sound and sufficiently detailed to allow for reproduction, and the discussion related to anonymizing articles and reviews during the review process is useful. The reviewers also offered suggestions for improvement. They point to terminology that could be clarified. They suggest adding URLs for each of the 23 services included in the study. Other suggestions include explaining why overlay journals were excluded, clarifying the limitation related to including only English-language platforms, archiving rawer input data to improve reproducibility, adding details related to the qualitative text analysis, discussing any existing empirical evidence about misconduct as it relates to different models of peer review, and improving field inclusiveness by avoiding conflation of “research” and “scientific research.”

The reviewers and I agree that the article is a valuable contribution to the metaresearch literature related to peer review processes.

Competing interests: Kathryn Zeiler is co-Editor-in-Chief of MetaROR working with Ludo Waltman, a co-author of the article and co-Editor-in-Chief of MetaROR.

Peer Review 1

Pen-Yuan Hsing

DOI: 10.70744/MetaROR.14.1.rv1

Thank you very much for the opportunity to review the preprint titled “Preprint review services: Disrupting the scholarly communication landscape?” (https://doi.org/10.31235/osf.io/8c6xm). The authors review services that facilitate peer review of preprints, primarily in the STEM (science, technology, engineering, and math) disciplines. They examine how these services operate and their role within the scholarly publishing ecosystem. Additionally, the authors discuss the potential benefits of these preprint peer review services, placing them in the context of tensions in the broader peer review reform movement. The discussions are organized according to four “schools of thought” in peer review reform, as outlined by Waltman et al. (2023), which provides a useful framework for analyzing the services. In terms of methodology, I believe the authors were thorough in their search for preprint review services, especially given that a systematic search might be impractical.

As I see it, the adoption of preprints and reforming peer review are key components of the move towards improving scholarly communication and open research. This article is a useful step along that journey, taking stock of current progress, with a discussion that illuminates possible paths forward. It is also well-structured and easy for me to follow. I believe it is a valuable contribution to the metaresearch literature.

On a high level, I believe the authors have made a reasonable case that preprint review services might make peer review more transparent and rewarding for all involved. Looking forward, I would like to see metaresearch which gathers further evidence that these benefits are truly being realised.

In this review, I will present some general points which merit further discussion or clarification to aid an uninitiated reader. Additionally, I raise one issue regarding how the authors framed the article and categorised preprint review services and the disciplines they serve. In my view, this problem does not fundamentally undermine the robust search, analyses, and discussion in this paper, but it risks putting off some researchers and constrains how broadly one should derive conclusions.

General comments

Some metaresearchers may be aware of preprints, but not all readers will be familiar with them. I suggest briefly defining what they are, how they work, and which types of research have benefited from preprints, similar to how “preprint review service” is clearly defined in the introduction.

Regarding Waltman et al.’s (2023) “Equity & Inclusion” school of thought, does it specifically aim for “balanced” representation by different groups as stated in this article? There is an important difference between “balanced” versus “equitable” representation, and I would like to see it addressed in this text.

Another analysis I would like to see is whether any of the 23 services reviewed present any evidence that their approach has improved research quality. For instance, the discussion on peer review efficiency and incentives states that there is currently “no hard evidence” that journals want to utilise reviews by Rapid Reviews: COVID-19, and that “not all journals are receptive” to partnerships. Are journals skeptical of whether preprint review services could improve research quality? Or might another dynamic be at work?

The authors cite Nguyen et al. (2015) and Okuzaki et al. (2019), stating that peer review is often “overloaded”. I would like to see a clearer explanation of what “overloaded” means in this context so that a reader does not have to read the two cited papers.

To the best of my understanding, one of the major sticking points in peer review reform is whether to anonymise reviewers and/or authors. Consequently, I appreciate the comprehensive discussion about this issue by the authors.

However, I am only partially convinced by the statement that double anonymity is “essentially incompatible” with preprint review. For example, there may be as yet not fully explored ways to publish anonymous preprints with (a) a notice that the preprint has been submitted to, or is undergoing, peer review; and (b) a note that the authors will be revealed once peer review has been performed (e.g. at least one review has been published). This would avoid the issue of publishing only after review is concluded, as is the case for Hypothesis and Peer Community In.

Additionally, the authors describe 13 services which aim to “balance transparency and protect reviewers’ interests”. This is a laudable goal, but I am concerned that framing this as a “balance” implies a binary choice, and that to have more of one, we must lose an equal amount of the other. Thinking only in terms of “balance” prevents creative, win-win solutions. Could a case be made for non-anonymity to be complemented by a reputation system for authors and reviewers? For example, major misconduct (e.g. retribution against a critical review) would be recorded in that system and dissuade bad actors. Something similar can already be seen in the reviewer evaluation system of CrowdPeer, which could plausibly be extended or modified to highlight misconduct.

I also note that misconduct and abusive behaviour already occur even in fully or partially anonymised peer review, and they are not limited to the review of preprints. While I am not aware of existing literature on this topic, academics’ fears seem reasonable. For example, there are at least anecdotal testimonies that a reviewer would deliberately reject a paper to retard the progress of a rival research group, while taking the ideas of that paper and beating their competitors to winning a grant. Or, a junior researcher might refrain from giving a negative review out of fear that the senior researcher whose work they are reviewing might retaliate. These fears, real or not, seem to play a part in the debates about if and how peer review should (or should not) be anonymised. I would like to see an exploration of whether de-anonymisation will improve or worsen this behaviour and in what contexts. And if such studies exist, it would be good to discuss them in this paper.

I found it interesting that almost all preprint review services claim to be complementary to, and not compete with, traditional journal-based peer review. The methodology described in this article cannot definitively explain what is going on, but I suspect there may be a connection between this aversion to competing with traditional journals, and (a) the skepticism of journals towards partnering with preprint review services and (b) the dearth of publisher-run options. I hypothesise that there is a power dynamic at play, where traditional publishers have a vested interest in maintaining the power they hold over scholarly communication, and that preprint review services stress their complementarity (instead of competitiveness) as a survival mechanism. This may be an avenue for further metaresearch.

To understand which fields of research the preprints on the services categorised under “all disciplines” actually come from, I used the Random Integer Set Generator by the Random.org true random number service (https://www.random.org/integer-sets/) to select five services for closer examination: Hypothesis, Peeriodicals, PubPeer, Qeios, and Researchers One. Of those, I observed that Hypothesis is an open source web annotation service that allows commenting on and discussion of any web page on the Internet regardless of whether it is research or preprints. Hypothesis has a sub-project named TRiP (Transparent Review in Preprints), which is their preprint review service in collaboration with Cold Spring Harbor Laboratory. It is unclear to me why the authors listed Hypothesis as the service name in Table 1 (and elsewhere) instead of TRiP (or other similar sub-projects). In addition, Hypothesis seems to be framed as a generic web annotation service that is used by some as a preprint review tool. This seems fundamentally different from others that are explicitly set up as preprint review services. This difference seems noteworthy to me.

To aid readers, I also suggest including hyperlinks to the 23 services reviewed in this paper. My comments on disciplinary representation in these services are elaborated further below.

One minor point of curiosity is that several services use an “automated tool” to select reviewers. It would be helpful to describe in this paper exactly what those tools are and how they work, or report situations where services do not explain it.

Lastly, what did the authors mean by “software heritage” in section 6? Are they referring to the organisation named Software Heritage (https://www.softwareheritage.org/) or something else? It is not clear to me how preprint reviews would be deposited in this context.

Respecting disciplinary and epistemic diversity

In the abstract and elsewhere in the article, the authors acknowledge that preprints are gaining momentum “in some fields” as a way to share “scientific” findings. After reading this article, I agree that preprint review services may disrupt publishing for research communities where preprints are in the process of being adopted or already normalised. However, I am less convinced that such disruption is occurring, or could occur, for scholarly publishing more generally.

I am particularly concerned about the casual conflation of “research” and “scientific research” in this article. Right from the start, it mentions how preprints allow sharing “new scientific findings” in the abstract, stating they “make scientific work available rapidly.” It also notes that preprints enable “scientific work to be accessed in a timely way not only by scientists, but also…” This framing implies that all “scholarly communication,” as mentioned in the title, is synonymous with “scientific communication.” Such language excludes researchers who do not typically identify their work as “scientific” research. Another example of this conflation appears in the caption for Figure 1, which outlines potential benefits of preprint review services. Here, “users” are defined as “scientists, policymakers, journalists, and citizens in general.” But what about researchers and scholars who do not see themselves as “scientists”?

Similarly, the authors describe the 23 preprint review services using six categories, one of which is “scientific discipline”. One of those disciplines is called “humanities” in the text, and Table 1 lists it as a discipline for Science Open Reviewed. Do the authors consider “humanities” to be a “scientific” discipline? If so, I think that needs to be justified with very strong evidence.

Additionally, Waltman et al.’s four schools of thought for peer review reform work well with the 23 services analysed. However, at least three out of the four are explicitly described as improving “scientific” research.

Related to the above is how the five “scientific disciplines” are described as the “usual organisation” of the scholarly communication landscape. On what basis should they be considered “usual”? In this formulation, research in literature, history, music, philosophy, and many other subjects would all be lumped together into the “humanities”, which sit at the same hierarchical level as “biomedical and life sciences”, arguably a much more specific discipline. My point is not to argue for a specific organisation of research disciplines, but to highlight a key epistemic assumption underlying the whole paper that comes across as very STEM-centric (science, technology, engineering, and math).

How might this part of the methodology affect the categories presented in Table 1? “Biomedical and life sciences” appear to be overrepresented compared to other “disciplines”. I’d like to see a discussion that examines this pattern, and considers why preprint review services (or maybe even preprints more generally) appear to cover mostly the biomedical or physical sciences.

In addition, there are 12 services described as serving “all disciplines”. I believe this paper can be improved by at least a qualitative assessment of the diversity of disciplines actually represented on those services. Because it is reported that many of these services stress improving the “reproducibility” of research, I suspect most of them serve disciplines which rely on experimental science.

I randomly selected five services for closer examination, as mentioned above. Of those, only Qeios has demonstrated an attempt to at least split “arts and humanities” into subfields. The others either don’t have such categories at all, or have a clear focus on a few disciplines (e.g. life sciences for Hypothesis/TRiP). In all cases I studied, there is a heavy focus on STEM subjects, especially biology or medical research. However, they are all categorised by the authors as serving “all disciplines”.

If preprint review services originate from, or mostly serve, a narrow range of STEM disciplines (especially experiment-based ones), it would be worth examining why that is the case, and whether preprints and reviews of them could (or could not) serve other disciplines and epistemologies.

It is postulated that preprint review services might “disrupt the scholarly communication landscape in a more radical way”. Considering the problematic language I observed, what about fields of research where peer-reviewed journal publications are not the primary form of communication? Would preprint review services disrupt their scholarly communications?

To be clear, my concern is not just the conflation of language in a linguistic sense but rather inequitable epistemic power. I worry that this conflation would (a) exclude, minoritise, and alienate researchers of diverse disciplines from engaging with metaresearch; and (b) blind us from a clear pattern in these 23 services, that is their strong focus on the life sciences and medical research and a discussion of why that might be the case. Critically, what message are we sending to, for example, a researcher of 18th century French poetry with the language and framing of this paper? I believe the way “disciplines” are currently presented here poses a real risk of devaluing and minoritising certain subject areas and ways of knowing. In its current form, I believe that while this paper is a very valuable contribution, one should not derive from it any conclusions which apply to scholarly publishing as a whole.

The authors have demonstrated inclusive language elsewhere. For example, they have consciously avoided “peer” when discussing preprint review services, clearly contrasting them to “journal-based peer review”. Therefore, I respectfully suggest that similar sensitivity be adopted to avoid treating “scientific research” and “research” as the same thing. A discussion, or reference to existing works, on the disciplinary skew of preprints (and reviews of them) would also add to the intellectual rigour of this already excellent piece.

Overall, I believe this paper is a valuable reflection on the state of preprints and services which review them. Addressing the points I raised, especially the use of more inclusive language with regards to disciplinary diversity, would further elevate its usefulness in the metaresearch discourse. Thank you again for the chance to review.

Signed:

Dr Pen-Yuan Hsing (ORCID ID: 0000-0002-5394-879X)

University of Bristol, United Kingdom

Data availability

I have checked the associated dataset, but still suggest including hyperlinks to the 23 services analysed in the main text of this paper.

Competing interests: None.

Peer Review 2

Ross Mounce

DOI: 10.70744/MetaROR.14.1.rv2

This manuscript examines preprint review services and their role in the scholarly communications ecosystem. It seems quite thorough to me. In Table 1 they list many peer-review services that I was unaware of, e.g. SciRate and Sinai Immunology Review Project.

To help elicit critical & confirmatory responses for this peer review report I am trialling Elsevier’s suggested “structured peer review” core questions, and treating this manuscript as a research article.

Introduction

  1. Is the background and literature section up to date and appropriate for the topic?

    Yes.

  2. Are the primary (and secondary) objectives clearly stated at the end of the introduction?

    No. Instead the authors have chosen to put the two research questions on page 6 in the methods section. I wonder if they ought to be moved into the introduction – the research questions are not methods in themselves. Might it be better to state the research questions first and then detail the methods one uses to address those questions afterwards? [As Elsevier’s structured template seems implicitly to prefer.]

Methods

  1. Are the study methods (including theory/applicability/modelling) reported in sufficient detail to allow for their replicability or reproducibility?

    I note with approval that the version number of the software they used (ATLAS.ti) was given.

    I note with approval that the underlying data is publicly archived under CC BY at figshare.

    The Atlas.ti report data spreadsheet could do with some small improvement – the column headers are a little cryptic, e.g. “Nº ST” and “ST”, which I eventually deduced mean Number of Schools of Thought and Schools of Thought (?)

    Is there a rawer form of the data that could be deposited with which to evidence the work done? The Atlas.ti report spreadsheet seemed like it was downstream output data from Atlas.ti. What was the rawer input data entered into Atlas.ti? Can this be archived somewhere in case researchers want to reanalyse it using other tools and methods?

    I note with disapproval that Atlas.ti is proprietary software which may hinder the reproducibility of this work. Nonetheless I acknowledge that Atlas.ti usage is somewhat ‘accepted’ in social sciences despite this issue.

    I think the qualitative text analysis is a little vague and/or under-described: “Using ATLAS.ti Windows (version 23.0.8.0), we carried out a qualitative analysis of text from the relevant sites, assigning codes covering what they do and why they have chosen to do it that way.” That’s not enough detail. Perhaps an example or two could be given? Was inter-rater reliability performed when ‘assigning codes’? How do we know the ‘codes’ were assigned accurately?

  2. Are statistical analyses, controls, sampling mechanism, and statistical reporting (e.g., P-values, CIs, effect sizes) appropriate and well described?

    This is a descriptive study (and that’s fine) so there aren’t really any statistics on show here other than simple ‘counts’ (of Schools of Thought) in this manuscript. There are probably some statistical processes going on within the proprietary qualitative analysis of text done in ATLAS.ti but it is under-described and so hard for me to evaluate.

Results

  1. Is the results presentation, including the number of tables and figures, appropriate to best present the study findings?

    Yes. However, I think a canonical URL to each service should be given.  A URL is very useful for disambiguation, to confirm e.g. that the authors mean this Hypothesis (www.hypothes.is) and NOT this Hypothesis (www.hyp.io). I know exactly which Hypothesis is the one the authors are referring to but we cannot assume all readers are experts 😊

    Optional suggestion: I wonder if the authors couldn’t present the table data in a slightly more visual and/or compact way? It’s not very visually appealing in its current state. Purely as an optional suggestion, to make the table more compact one could recode the answers given in one or more of the columns 2, 3 and 4 in the table e.g. “all disciplines = ⬤ , biomedical and life sciences = ▲, social sciences =  ‡  , engineering and technology = † “. I note this would give more space in the table to print the URLs for each service that both reviewers have requested.

    Service name | Developed by | Scientific disciplines | Types of outputs
    Episciences | Other | blah blah blah.
    Faculty Opinions | Individual researcher | blah blah blah.
    Red Team Market | Individual researcher | blah blah blah.

    The “Types of outputs” column might even lend itself to mini-colour-pictograms (?), which could be more concise and more visually appealing. A table just of text might be scientifically ‘correct’, but it is incredibly dull for readers, in my opinion.

  2. Are additional sub-analyses or statistical measures needed (e.g., reporting of CIs, effect sizes, sensitivity analyses)?

    No / Not applicable. 

Discussion

  1. Is the interpretation of results and study conclusions supported by the data and the study design?

    Yes.

  2. Have the authors clearly emphasized the limitations of their study/theory/methods/argument?

    No. Perhaps a discussion of the linguistic/comprehension bias of the authors might be appropriate for this manuscript. What if there are ‘local’ or regional Chinese, Japanese, Indonesian or Arabic language preprint review services out there? Would this authorship team really be able to find them?

Additional points:

  • Perhaps the points made in this manuscript about financial sustainability (p24) are a little too pessimistic. I get it, there is merit to this argument, but there is also some significant investment going on there if you know where to look. Perhaps it might be worth citing some recent investments e.g. Gates -> PREreview (2024) https://content.prereview.org/prereview-welcomes-funding/  and Arcadia’s $4 million USD to COAR for the Notify Project which supports a range of preprint review communities including Peer Community In, Episciences, PREreview and Harvard Library.  (source: https://coar-repositories.org/news-updates/coar-welcomes-significant-funding-for-the-notify-project/ )

  • Although I note they are mentioned, I think more needs to be written about the similarity and overlap between ‘overlay journals’ and preprint review services. Are these arguably not just two different terms for kinda the same thing? If you have Peer Community In, which has its overlay component in the form of the Peer Community Journal, why not mention other overlay journals like Discrete Analysis and The Open Journal of Astrophysics? I think Peer Community In (& its PCJ) is the go-to example of the thin-ness of the line that separates (or doesn’t!) overlay journals and preprint review services. Some more exposition on this would be useful.

Competing interests: None.

Author Response

DOI: 10.70744/MetaROR.14.1.ar

We would like to sincerely thank the editor and the two reviewers for their meticulous evaluation and constructive feedback. Their thoughtful comments and insightful suggestions have been invaluable in enhancing the clarity and strengthening the overall quality of the manuscript. We are truly grateful for the time and expertise they have devoted to improving this work.

In the revised version of the manuscript, we have therefore improved its structure, clarified the study’s focus, and enhanced the clarity of its presentation.

We hope that the revised version more effectively articulates the motivation and design of the study and that our detailed responses below comprehensively address the constructive and thoughtful feedback from each reviewer.

Response to Reviewer 1 (Pen-Yuan Hsing)

[General comments]

We sincerely thank the reviewer for their thoughtful feedback, as well as for the valuable suggestions to improve the manuscript’s structure. The comments were highly constructive and helped improve the study’s focus and the clarity of its presentation.

Reviewer comment:

“Some metaresearchers may be aware of preprints, but not all readers will be familiar with them. I suggest briefly defining what they are, how they work, and which types of research have benefited from preprints, similar to how “preprint review service” is clearly defined in the introduction.”

Following the reviewer’s suggestion, we have expanded the Introduction section to briefly define preprints, explain how they work, and indicate which types of research have benefited from them.

Reviewer comment:

“Regarding Waltman et al.’s (2023) “Equity & Inclusion” school of thought, does it specifically aim for “balanced” representation by different groups as stated in this article? There is an important difference between “balanced” versus “equitable” representation, and I would like to see it addressed in this text.”

According to Waltman et al. (2023), the “Equity & Inclusion” school of thought “promotes a balanced representation of different groups of researchers in the peer review system (in particular in gatekeeping roles as editor or reviewer) in order to create a more diverse and inclusive research system as a whole.” For this reason, we referred to the aim of achieving “balanced” representation across different groups.

Reviewer comment:

“Another analysis I would like to see is whether any of the 23 services reviewed present any evidence that their approach has improved research quality. For instance, the discussion on peer review efficiency and incentives states that there is currently “no hard evidence” that journals want to utilise reviews by Rapid Reviews: COVID-19, and that “not all journals are receptive” to partnerships. Are journals skeptical of whether preprint review services could improve research quality? Or might another dynamic be at work?”

Unfortunately, based on the information available in ReimagineReview and in the supplementary sources we consulted (e.g., service websites, blog posts, and scientific articles), it was not possible to present evidence on whether the work of the analyzed preprint review services has improved research quality. Nevertheless, we recognize the importance of this issue and intend to examine it in greater depth in a follow-up study currently in development.

Reviewer comment:

“The authors cite Nguyen et al. (2015) and Okuzaki et al. (2019), stating that peer review is often “overloaded”. I would like to see a clearer explanation of what “overloaded” means in this context so that a reader does not have to read the two cited papers.”

To clarify the concept of “overloaded” as used by Nguyen et al. (2015) and Okuzaki et al. (2019), we have expanded our explanation in the revised manuscript, making clear that “overloaded” refers to excessive review times and the growing burden on reviewers and editors, which contribute to substantial delays in publication.

Reviewer comment:

“To the best of my understanding, one of the major sticking points in peer review reform is whether to anonymise reviewers and/or authors. Consequently, I appreciate the comprehensive discussion about this issue by the authors.

However, I am only partially convinced by the statement that double anonymity is “essentially incompatible” with preprint review.”

Our view that double anonymity is “essentially incompatible” with preprint review is based on the fact that preprinting is explicitly intended to promote transparency, by making research outputs, including the names of the authors, openly accessible at an early stage and by fostering open discussion involving named authors. Double-anonymous peer review, by contrast, relies on protecting author and reviewer identities and therefore, while potentially more inclusive, tends to be less transparent and incompatible with early sharing of research results. When authors share their research results at an early stage by preprinting their work, reviewers can always look up the preprinted work and see the identities of the authors, which means peer review cannot be truly double-anonymous. For this reason, we have highlighted in the manuscript the efforts of some of the analyzed services to “manage the tensions between the drive for transparency, characteristic of preprinting, and the pursuit of inclusiveness”.

Reviewer comment:

“I would like to see an exploration of whether de-anonymisation will improve or worsen this behaviour and in what contexts. And if such studies exist, it would be good to discuss them in this paper.”

We agree it is important to explore whether de-anonymisation will improve or worsen misconduct and abusive behaviour and in what contexts. Given the descriptive nature of our work, we are unable to address this issue in more depth within the current manuscript. However, we intend to examine the issue in greater depth in a follow-up study currently in development.

Reviewer comment:

“I found it interesting that almost all preprint review services claim to be complementary to, and not compete with, traditional journal-based peer review. The methodology described in this article cannot definitively explain what is going on, but I suspect there may be a connection between this aversion to competing with traditional journals, and (a) the skepticism of journals towards partnering with preprint review services and (b) the dearth of publisher-run options. I hypothesise that there is a power dynamic at play, where traditional publishers have a vested interest in maintaining the power they hold over scholarly communication, and that preprint review services stress their complementarity as a survival mechanism. This may be an avenue for further metaresearch.”

We agree that this is an interesting feature of the data. Given the descriptive nature of our work, it is not possible in this study to explain why almost all preprint review services claim to be complementary to, and not compete with, traditional journal-based peer review. We recognize the importance of this issue and intend to further develop this discussion in future work.

Reviewer comment:

“To aid readers, I also suggest including hyperlinks to the 23 services reviewed in this paper.”

We fully agree that providing URLs would be valuable for disambiguation. However, by the time we finalized our manuscript, some URLs were no longer available, and some services have since evolved into different things, as noted in the Methods section of the manuscript:

“The websites of two preprint review services included in our initial set became unavailable after November 2022.”

“On the other hand, some services have evolved… This is the case for Rapid Reviews: COVID-19, which has expanded beyond COVID-19.”

Although we acknowledge the inherent risk of URLs becoming outdated, we decided to include them in the manuscript in accordance with the reviewer’s suggestion (see Table 1). The revised version of Table 1 is also available on Figshare.

Reviewer comment:

“One minor point of curiosity is that several services use an “automated tool” to select reviewers. It would be helpful to describe in this paper exactly what those tools are and how they work, or report situations where services do not explain it.”

Given the descriptive nature of our work, we are unable to address this issue in more depth within the current manuscript. However, we intend to examine the issue in greater depth in a follow-up study currently in development.

Reviewer comment:

“Lastly, what did the authors mean by “software heritage” in section 6? Are they referring to the organisation named Software Heritage (https://www.softwareheritage.org/) or something else? It is not clear to me how preprint reviews would be deposited in this context.”

We confirm that we were referring to the organization Software Heritage (https://www.softwareheritage.org/). On reflection, however, we have decided to delete the text referring to Software Heritage.

[Respecting disciplinary and epistemic diversity]

We appreciate the reviewer’s thoughtful questions regarding the following terminological issues. Detailed responses to each of the points raised are provided below.

“research” and “scientific research” – to ensure consistency, we have standardized the terminology by using the term “research” throughout, and the expression “scientific research” no longer appears in the manuscript.

“users” in the caption for Figure 1 – the term “users” is intended as a broad designation, without any intention of excluding anyone. Nevertheless, following the reviewer’s suggestion, to improve clarity and maintain consistency throughout the manuscript, we have opted to use the term “researchers” as a general designation encompassing scientists, scholars, and others engaged in research activities. We believe this single inclusive term offers the intended breadth of reference.

“scientific disciplines” – in the manuscript, the five broad areas – Physical Sciences, Biomedical and Life Sciences, Engineering and Technology, Social Sciences, and Humanities – are described as the “usual organization” of the scholarly communication landscape because they reflect a widely recognized, high-level disciplinary structure used by international organizations and classification systems. Although there are other alternative disciplinary taxonomies, this five-fold division provides a commonly accepted point of reference for analyzing and comparing developments across the scholarly communication system.

“all disciplines” – as mentioned in the manuscript, we used the category “All Disciplines” to designate services that host work from any scientific field or that adopt a multidisciplinary approach without clearly defining the disciplines or sub-disciplines included. Our qualitative analysis relied on the information available in ReimagineReview, supplemented with desk research using online sources such as the websites of preprint review services, blog posts, and scientific articles. Unlike the reviewer’s own closer examination of a subset of services, we did not undertake this level of detailed assessment within the scope of the present study. Nevertheless, we recognize the need for further clarification and intend to explore this issue in greater depth in a follow-up study currently in development.

Response to Reviewer 2 (Ross Mounce)

We sincerely thank the reviewer for their thoughtful feedback, as well as for the valuable suggestions to improve the manuscript’s structure. The comments were highly constructive and helped improve the study’s focus and the clarity of its presentation.

Reviewer comment:

“…the authors have chosen to put the two research questions on page 6 in the methods section. I wonder if they ought to be moved into the introduction – the research questions are not methods in themselves. Might it be better to state the research questions first and then detail the methods one uses to address those questions afterwards?”

We agree that presenting the research questions in the introduction provides a clearer structure and better context for the study. Accordingly, we have moved the research questions from the Methods section to the Introduction section. In addition, we now also refer explicitly to the research questions in the Conclusions section.

Reviewer comment:

“The Atlas.ti report data spreadsheet could do with some small improvement – the column headers are a little cryptic, e.g. “Nº ST” and “ST”, which I eventually deduced mean Number of Schools of Thought and Schools of Thought (?)”

With regard to the Atlas.ti report data spreadsheet available at figshare, we agree that the column headers were a little cryptic, and so a new version of the table (Atlas.ti_Report_Preprint_Review_Services_vs_Schools_V2) is now available in figshare, with the headers “Number of Schools of Thought” and “Schools of Thought” (instead of “Nº ST” and “ST”).

Reviewer comment:

“Is there a rawer form of the data that could be deposited with which to evidence the work done? The Atlas.ti report spreadsheet seemed like it was downstream output data from Atlas.ti. What was the rawer input data entered into Atlas.ti? Can this be archived somewhere in case researchers want to reanalyse it using other tools and methods?”

We fully agree on the importance of ensuring open access to the raw input data analyzed in Atlas.ti, so that other researchers may reanalyze it using different tools and methods. For this reason, the complete set of raw input data for each service has been made openly available on Figshare as Word documents (e.g., Hypothesis.docx, Peer Community in.docx).

Reviewer comment:

“I think the qualitative text analysis is a little vague and/or under-described: “Using ATLAS.ti Windows (version 23.0.8.0), we carried out a qualitative analysis of text from the relevant sites, assigning codes covering what they do and why they have chosen to do it that way.” That’s not enough detail. Perhaps an example or two could be given? Was inter-rater reliability performed when ‘assigning codes’? How do we know the ‘codes’ were assigned accurately?”

We agree that the description of the qualitative text analysis could be more detailed, and so we have added some additional information in the Methods section. It should be noted for clarification that we did not conduct a formal inter-rater reliability assessment. Instead, one of the authors (SOH) first applied the coding scheme to the dataset. To ensure accuracy in the coding process, codes were then refined and validated through group discussion with the other authors, and consensus was reached regarding any changes, thereby ensuring consistency. Where there was ambiguity in project descriptions, we contacted the service managers to clarify and confirm some details. As noted in the manuscript, the four peer review schools of thought were originally proposed by some of us (Waltman et al., 2023), which enabled us to apply the framework with greater confidence when evaluating the characteristics of the 23 preprint review services against the key issues raised by these schools.

Reviewer comment:

“This is a descriptive study (and that’s fine) so there aren’t really any statistics on show here other than simple ‘counts’ (of Schools of Thought) in this manuscript. There are probably some statistical processes going on within the proprietary qualitative analysis of text done in ATLAS.ti but it is under-described and so hard for me to evaluate.”

As the reviewer correctly noted, this is a descriptive study, and no statistical analyses were performed using ATLAS.ti.

Reviewer comment:

“I think a canonical URL to each service should be given. A URL is very useful for disambiguation, to confirm e.g. that the authors mean this Hypothesis (www.hypothes.is) and NOT this Hypothesis (www.hyp.io). I know exactly which Hypothesis is the one the authors are referring to but we cannot assume all readers are experts.”

We fully agree that providing URLs would be valuable for disambiguation. However, by the time we finalized our manuscript, some URLs were no longer available, and some services have since evolved into different things, as noted in the Methods section:

“The websites of two preprint review services included in our initial set became unavailable after November 2022.”

“On the other hand, some services have evolved… This is the case for Rapid Reviews: COVID-19, which has expanded beyond COVID-19.”

While acknowledging the inherent risk of URLs becoming outdated, we have included URLs in the revised manuscript in accordance with the reviewer’s suggestion (see Table 1). The revised version of Table 1 is also available on Figshare.

Reviewer comment:

“I wonder if the authors couldn’t present the table data in a slightly more visual and/or compact way? It’s not very visually appealing in its current state. Purely as an optional suggestion, to make the table more compact one could recode the answers given in one or more of the columns 2, 3 and 4 in the table e.g. “all disciplines = ⬤ , biomedical and life sciences = ▲, social sciences = ‡ , engineering and technology = † “. I note this would give more space in the table to print the URLs for each service that both reviewers have requested.”

After exploring several visual approaches to presenting the table data in a more visual and/or compact format, we found that maintaining the current text-based format provides clearer information and facilitates better understanding. While it may be less visually appealing, we believe this format conveys the data more effectively and comprehensively.

Reviewer comment:

“Perhaps a discussion of the linguistic/comprehension bias of the authors might be appropriate for this manuscript. What if there are ‘local’ or regional Chinese, Japanese, Indonesian or Arabic language preprint review services out there? Would this authorship team really be able to find them?”

In the revised manuscript, we explicitly acknowledge this issue in the Methods section:

“For pragmatic reasons, we focus exclusively on English-language services, so services that may exist in other languages are not included in our study.”

Reviewer comment:

“Perhaps the points made in this manuscript about financial sustainability (p24) are a little too pessimistic. I get it, there is merit to this argument, but there is also some significant investment going on there if you know where to look. Perhaps it might be worth citing some recent investments e.g. Gates -> PREreview (2024) https://content.prereview.org/prereview-welcomes-funding/ and Arcadia’s $4 million USD to COAR for the Notify Project which supports a range of preprint review communities including Peer Community In, Episciences, PREreview and Harvard Library. (source: https://coar-repositories.org/news-updates/coar-welcomes-significant-funding-for-the-notify-project/ )”

We thank the reviewer for this valuable observation and for pointing out recent examples of significant investment in the field. We agree that our discussion of sustainability may appear somewhat pessimistic. Within our sample of 23 preprint review services, however, we observed considerable variation in organizational models, structures, teams, budgets, and services offered. While some initiatives, such as those highlighted by the reviewer, are growing and benefiting from promising funding opportunities, others continue to struggle with sustainability. Overall, we therefore consider sustainability to remain one of the key challenges facing these services. We also recognize the importance of this issue and plan to explore it in greater depth in a follow-up study currently in development.

Reviewer comment:

“Although I note they are mentioned, I think more needs to be written about the similarity and overlap between ‘overlay journals’ and preprint review services. Are these arguably not just two different terms for kinda the same thing? If you have Peer Community In, which has its overlay component in the form of the Peer Community Journal, why not mention other overlay journals like Discrete Analysis and The Open Journal of Astrophysics? I think Peer Community In (& its PCJ) is the go-to example of the thin-ness of the line that separates (or doesn’t!) overlay journals and preprint review services. Some more exposition on this would be useful.”

Regarding the overlap between overlay journals and preprint review services, we agree that the distinction is often thin. While our manuscript focuses on preprint review services, we now explicitly acknowledge this in the Introduction section, with examples such as Peer Community In (and its Peer Community Journal) and Rapid Reviews: Infectious Diseases (formerly known as Rapid Reviews: COVID-19) illustrating the close overlap.
