Published at MetaROR

February 11, 2026

Cite this article as:

Henriques, S. O., Rzayeva, N., Pinfield, S., & Waltman, L. (2025, November 5). Preprint review services: Disrupting the scholarly communication landscape?. https://doi.org/10.31235/osf.io/8c6xm_v3


Preprint review services: Disrupting the scholarly communication landscape?

Susana Oliveira Henriques1,2,3, Narmin Rzayeva1,2,4, Stephen Pinfield1,5, Ludo Waltman1,2

1. Research on Research Institute (RoRI)
2. Centre for Science and Technology Studies (CWTS), Leiden University
3. Central Library, Lisbon University Medical School
4. Azerbaijan University of Architecture and Construction
5. Information School, University of Sheffield

Originally published on November 5, 2025 at: 

Abstract

Preprinting has gained considerable momentum, and in some fields it has become a well-established way to share new scientific findings. The possibility of organising quality control and peer review for preprints is also increasingly highlighted, leading to the development of preprint review services. We report a descriptive study of preprint review services with the aim of developing a systematic understanding of the main characteristics of these services, evaluating how they manage preprint review, and positioning them in the broader scholarly communication landscape. Our study shows that preprint review services have the potential to turn peer review into a more transparent and rewarding experience and to improve publishing and peer review workflows. We are witnessing the growth of a mixed system in which preprint servers, preprint review services and journals operate mostly in complementary ways. In the longer term, however, preprint review services may disrupt the scholarly communication landscape in a more radical way.

1. Introduction

A preprint may be defined as an author’s version of a research manuscript, prior to formal peer review by a journal, that is deposited on a public server (Berg et al., 2016). Posting preprints is an open science practice that helps to make scholarly publishing faster and more transparent (Ni & Waltman, 2024). Preprint servers make scientific work available rapidly (albeit usually in a form prior to peer review) and openly, enabling it to be accessed in a timely way not only by scientists but also by policymakers, journalists and others. Preprints are well-established in some fields but not in all. The adoption of preprinting is highest in the physical and mathematical sciences, particularly among researchers in the Americas and Europe (Rzayeva et al., 2025). The COVID-19 pandemic led to an unprecedented rise in the use of preprints by the biomedical research community. Preprints were an essential part of the communication of research about COVID-19, in particular as a way of accelerating the dissemination of research results. Another potential benefit of preprints is that they allow authors to receive and incorporate feedback from the wider community prior to journal publication (Fraser et al., 2021). Nevertheless, due to the non-peer-reviewed nature of preprints, concerns about the lack of quality assurance remain (Blatch-Jones et al., 2023; Ni & Waltman, 2024). Partly in response to these concerns, a growing number of services facilitate evaluative peer feedback (e.g., comments, recommendations, reviews) on preprints. Some of these services may even be seen as alternatives to journal-based peer review, pointing towards possible future approaches to overcoming weaknesses of the journal-based peer review system.

In Figure 1, we summarise some of the potential benefits of services facilitating the review of preprints, as they are commonly presented in sources such as the websites of preprint review services, blog posts and scientific articles. In this paper, we will explore how preprint review services are trying to achieve these benefits and how they may add value to the scholarly communication system, pointing to a cultural shift in peer review (Avissar-Whiting et al., 2024).

Figure 1. Potential benefits of services facilitating the review of preprints. *By users, we mean researchers, policymakers, journalists, and citizens in general.

Our aim is to develop a systematic understanding of the main characteristics of preprint review services in order to evaluate how these services manage preprint review and to position these services in the wider scholarly communication and peer review environment, including journal publishing.

By a preprint review service, we mean a journal-independent peer review service for articles posted on a preprint server, where the peer review process is publicly visible. We note that the distinction between preprint review services and overlay journals (Rousi & Laakso, 2022) is often thin, with examples such as Peer Community In (and its Peer Community Journal) and Rapid Reviews: Infectious Diseases (formerly known as Rapid Reviews: COVID-19) illustrating their close overlap. We also note that preprint review services are sometimes also referred to as publish-review-curate services (Corker et al., 2024).

We report here a descriptive study based on a qualitative analysis of data available in the ReimagineReview registry and other online sources, such as the websites of preprint review services, blog posts and scientific articles. This paper complements a study recently reported by Lutz et al. (2023) based on an ongoing survey of Alternative Publishing Platforms. By providing additional information on preprint review services, we contribute to a better understanding of the landscape formed by these platforms and of their place in the open scholarly communication ecosystem.

We have opted not to use the term ‘peer’ when discussing platforms for the review of preprints, adopting instead the designation ‘preprint review services’ to distinguish them from ‘journal-based peer review’. This better reflects the diversity of review approaches facilitated by preprint review services, from the spontaneous posting of feedback by individuals to community-driven review platforms. It also acknowledges that feedback on preprints is not bound by the conventions often associated with journal-based peer review: a variety of forms of preprint feedback are evident, ranging from minimal ‘ratings’ and informal approaches to in-depth formal peer review (Avissar-Whiting et al., 2023).

In our work, we use as an analytical lens a framework recently proposed by some of us (Waltman et al., 2023) in which a distinction is made between four schools of thought on innovation in peer review. The four schools, referred to as the Quality & Reproducibility school, the Democracy & Transparency school, the Equity & Inclusion school, and the Efficiency & Incentives school, provide a framework for organising innovations in the peer review space, grouping them in terms of how they perceive problems of peer review and develop solutions in response to these problems. We discuss these in detail in the analysis below. We suggest that the four schools framework offers a useful way to better understand the complexity of innovation in preprint review.

Our work addresses the following research questions:

RQ1 – How do preprint review services differ from, and what do they have in common with, traditional peer review?

RQ2 – How are preprint review services linked with the four schools of thought on innovation in peer review?

RQ3 – How do preprint review services manage connections between the four schools of thought when adopting ideas from multiple schools?

RQ4 – How can preprint review supplement journal-based peer review or offer an alternative to it?

Our paper is organised as follows. Following a description of our methods in Section 2, Section 3 provides an overview of services facilitating the review of preprints. Section 4 discusses the contribution of preprint review services in addressing the various issues raised by the four schools of thought on peer review. Section 5 explores how preprint review services are managing apparent tensions between different aims, focusing on tensions related to anonymisation of reviewers and authors. Section 6 discusses how preprint review services fit into the publishing landscape and how they may influence the development of this landscape, and Section 7 summarises our findings for each of our four research questions.

2. Methods

We carried out a descriptive study using qualitative analysis. We used ReimagineReview (RR) as a starting point to identify services and initiatives to facilitate the review of preprints. RR is a registry of services and experiments around peer review of scientific outputs created by ASAPbio (n.d.). For additional information, we supplemented the directory data with desk research using online sources such as the websites of preprint review services, blog posts and scientific articles. For pragmatic reasons, we focus exclusively on English-language services, so services that may exist in other languages are not included in our study.

Using ATLAS.ti Windows (version 23.0.8.0), we carried out a qualitative analysis of text from the relevant sites, assigning codes covering what the services do and why they have chosen to do it that way. First, one of us (SOH) applied the coding scheme to the dataset. To ensure accuracy in the coding process, codes were then refined and validated through group discussion with the other authors, and consensus was reached regarding any changes, thereby ensuring consistency. Where there was ambiguity in project descriptions, we contacted the service managers to clarify and confirm details.

As of April 2022, the RR directory listed 35 services for preprint review. For our study, we considered only services where the process of reviewing preprints was publicly visible; ReimagineReview also includes services that do not publish reviews, and these were excluded. We finalised the selection and analysis of services in November 2022. The websites of two preprint review services included in our initial set, CrowdPeer and Red Team Market, became unavailable after November 2022. As CrowdPeer became unavailable before we finished the analysis, we could not check and describe its main characteristics as we did for the other services, and we therefore decided not to include it in our final set of 23 services for review of preprints. Nevertheless, based on our initial analysis and the information available in the RR directory, we have included some comments on CrowdPeer as an illustration of possible future models of peer review and scholarly communication, where relevant. Red Team Market was taken out of operation later, so we were able to include it in our analysis. On the other hand, some services have evolved, and for these services our analyses, as a snapshot, may not be fully up-to-date. This is the case for Rapid Reviews: COVID-19, which has expanded beyond COVID-19 and is now Rapid Reviews: Infectious Diseases, an open-access overlay journal that accelerates peer review of important infectious disease-related research preprints (RR\C19, n.d.).

To provide an overview of the 23 services included in our final set, we first described their main characteristics according to six categories: 1) project name, 2) developed by, 3) scientific disciplines, 4) goals and intentions, 5) peer review approach, and 6) approach to transparency. Under “developed by”, we used three categories: publisher, university, and individual researcher or group of researchers. When detailed information was not available online, or was available but not sufficient to support our classification, we contacted the service managers, asking them how they would describe who leads the project. We used an additional “other” category for services that did not respond and where ambiguity remained.

For the definition of scientific disciplines, we grouped all disciplines identified among our set into five categories considering the usual organisation of the scholarly communication landscape: Physical Sciences, Biomedical and Life Sciences, Engineering and Technology, Social Sciences, and Humanities. We considered the All Disciplines category for services hosting any scientific discipline or having a multidisciplinary approach without a clear definition of the disciplines or sub-disciplines included.

To discuss how preprint review services fit into the publishing landscape and how they identify possible future models of peer review and scholarly communication, we then analysed the stated goals, review approach and transparency, using previous studies as a baseline to discuss and validate our findings.

Our use of the four schools of thought framework enabled us to explore how innovation and experimentation promoted by preprint review services can address the journal- based peer review challenges identified by Waltman et al. (2023) and provide possible solutions and future directions. We assessed the contribution of preprint review services against the key issues identified in the four schools of thought framework, considering the stated goals and intentions of each service, their approach to peer review, and their commitment to openness and transparency. As Waltman et al. (2023) state, these schools have different views on the key problems of the peer review system and the innovations needed to address these problems. While the schools may at times complement each other, the differences in their views may also give rise to tensions. Based on this idea, we aim to understand how preprint review services are able to manage possible tensions when bringing together the key issues of different schools.

Finally, we identified possible future models of peer review and scholarly communication that may arise from the experimentation we are currently witnessing.

3. Overview of preprint review services

Our final set, presented in Table 1, consists of 23 services for review of preprints – some developed by newly-established groups, others by pre-existing organisations, such as publishers or universities. Some of them are focused only on reviewing preprints (7 services), others review preprints and journal articles (14) or privately shared manuscripts (5) or other types of scholarly outputs (11). The most represented category of developers is individual researcher or group of researchers (15 services), followed by others (5), publishers (2) and universities (1). The most represented scientific disciplines are all disciplines (12 services) and biomedical and life sciences (8), followed by physical sciences (3), social sciences (3), engineering and technology (2), and humanities (1).

Table 1: Overview of the services for review of preprints

Service name | URL | Developed by | Scientific disciplines | Types of outputs
Crowd preprint review | https://asapbio.org/crowd-preprint-review | Individual researcher or group of researchers | Biomedical and Life Sciences | Preprints
Episciences | https://www.episciences.org/ | Other | All disciplines | Preprints; Journal articles; Other scholarly outputs
Faculty Opinions | https://facultyopinions.com/ | Individual researcher or group of researchers | Biomedical and Life Sciences | Journal articles; Other scholarly outputs; Preprints
Hypothesis | https://web.hypothes.is/ | Individual researcher or group of researchers | All disciplines | Journal articles; Other scholarly outputs; Preprints; Privately shared manuscripts
Interactive Public Peer Review | https://publications.copernicus.org/services/public_peer_review.html | Publisher | All disciplines | Journal articles; Preprints
Peer Community In | https://peercommunityin.org/ | Individual researcher or group of researchers | All disciplines | Preprints; Privately shared manuscripts
Peer Library | https://peerlibrary.org/ | Individual researcher or group of researchers | All disciplines | Journal articles; Other scholarly outputs; Preprints; Privately shared manuscripts
Peeriodicals | https://peeriodicals.com | Other | All disciplines | Journal articles; Other scholarly outputs; Preprints
PeerRef | https://www.peerref.com/ | Individual researcher or group of researchers | All disciplines | Preprints
Plaudit | https://gitlab.com/Flockademic/plaudit | Individual researcher or group of researchers | All disciplines | Journal articles; Other scholarly outputs; Preprints
preLights | https://prelights.biologists.com/ | Individual researcher or group of researchers | Biomedical and Life Sciences | Preprints
PREreview | https://prereview.org/ | Individual researcher or group of researchers | All disciplines | Preprints
PubPeer | https://pubpeer.com | Other | All disciplines | Journal articles; Preprints
Qeios | https://www.qeios.com/ | Individual researcher or group of researchers | All disciplines | Journal articles; Other scholarly outputs; Preprints
Rapid Reviews: COVID-19 | https://rapidreviewscovid19.mitpress.mit.edu/ | Publisher | Biomedical and Life Sciences; Social Sciences | Journal articles; Preprints
Red Team Market | https://redteammarket.com/ | Individual researcher or group of researchers | Social Sciences | Journal articles; Other scholarly outputs; Preprints; Privately shared manuscripts
Researchers.One | https://researchers.one/ | Individual researcher or group of researchers | All disciplines | Other scholarly outputs; Preprints; Privately shared manuscripts
Review Commons | https://www.reviewcommons.org/ | Individual researcher or group of researchers | Biomedical and Life Sciences | Preprints
Scibase | https://scibase.co/ | Other | Physical Sciences; Biomedical and Life Sciences | Journal articles; Other scholarly outputs; Preprints
Science Open Reviewed | https://science-open-reviewed.com/webapp/ | University | Biomedical and Life Sciences; Engineering and Technology; Social Sciences; Humanities | Journal articles; Preprints
SciPost | https://scipost.org/ | Other | Physical Sciences | Journal articles; Other scholarly outputs; Preprints
SciRate | https://scirate.com/ | Individual researcher or group of researchers | Physical Sciences; Engineering and Technology | Preprints
Sinai Immunology Review Project | https://observablehq.com/@ismms-himc/covid-19-sars-cov-2-preprints-from-medrxiv-and-biorxiv | Individual researcher or group of researchers | Biomedical and Life Sciences | Preprints

Table 2 lists the different approaches to peer review transparency and openness taken by the preprint review services. Offering alternative publication options or alternative approaches to quality assessment compared with the traditional journal-based model is a common goal for all services. One of the key motivations in encouraging review of preprints is to enable authors to benefit from having their work reviewed and improved before submission to a journal. This is often seen as having the potential benefit of reducing rounds of journal re-review. Some services also give credit to reviewers as a way of incentivising participation. Review of preprints is often explicitly said to address the need for quality control of preprints, reinforcing confidence in the use of preprints and making the evaluation of scientific work more efficient, open and transparent. Transparency is an evident priority in services that implement one or more of the different aspects of open peer review identified by Ross-Hellauer (2017), including “open reports” (where reviewer reports and author responses are published), “open identities” (where the identities of authors and reviewers are known to each other), “open interaction” (where direct unmediated exchange between authors and reviewers is enabled) and “open participation” (where reviewers can self-nominate and add comments without being invited to do so), amongst others. In some cases, such as Review Commons, transparency applies only to review reports and not to reviewers’ identities. In others, such as SciPost and Peer Community In, reviewers have the option to withhold their identity, becoming visible only to editors or editorial staff. Two services, Peer Community In and Hypothesis, support double-anonymous peer review, in which not only reviewers but also authors can withhold their identity.

The services promote new communication patterns regarding quality assessment and reviewer selection (Table 2). Public post-publication reviews, recommendations, comments, free-form commenting and quantitative scores are among the different assessment approaches we identified. Among the seven services that are restricted to reviewing preprints, the most frequent quality assessment approach is the structured review form (where reviews are input using standardised structured headings). This approach is used by 4 services: PREreview, PeerRef, Review Commons and Sinai Immunology Review Project. With respect to reviewer selection, in all but 2 services (Peeriodicals and Peer Community In) the selection of reviewers does not depend solely on an editor’s decision. We identified 7 services that allow reviewers to be selected by an editor, the editorial staff, or the community and 2 services where reviewers are selected by an automated tool. There are 12 services where reviewers can self-nominate. Although the use of an automated tool constitutes an innovative approach to the selection of reviewers, it is not limited to preprint review services. Compared with journal-based peer review, the self-nomination of reviewers is an important innovative aspect of preprint review services, enabling reviewers to get involved in an ostensibly more participative and interactive review process. Self-nomination means that essentially anyone can serve as a reviewer on a platform, without intervention from an editor, editorial staff or anyone else. In most cases, reviewers just have to be registered on the platform to make a comment or recommendation, or to endorse a preprint based on a pick-and-choose format (e.g., the Plaudit endorsement tool).

Some initiatives offer services that go beyond the review process itself. preLights and PREreview, for instance, invest in reviewer training for early-career researchers and in other initiatives focused on promoting and recognising reviewers. preLights provides assessments of reviewers (peer review of peer reviews) and recognition for reviewers’ contributions. PREreview organises training workshops centred on issues of equity, diversity, and inclusion. Another focus of some services is reproducibility, a key issue relating to the quality of scientific outputs. The aim of overcoming or mitigating the lack of quality control associated with preprints and of addressing issues of reproducibility is visible in the policies and procedures of some of the services. This is the case for Qeios, Scibase and Peer Community In, which request authors to make code and data openly available and then ask reviewers to consider the code and data in their review.

Table 2: Different approaches to peer review transparency and openness

Service | Reviewer selected by | Peer review format | Reviewer identity | Reviewer identity known to | Competing interests | Transparency
Crowd preprint review | Self-nominated | Annotations | Non-anonymous | Editor or editorial staff | Not included | Open identities; Open interaction; Open reports; Single anonymous
Episciences | Editor, editorial staff | Free-form commenting; Quantitative scores; Structured review form | Anonymous; Non-anonymous | Editor or editorial staff or public | Checked | Open identities; Open interaction; Open reports; Single anonymous
Faculty Opinions | Editor, editorial staff or community | Annotations; Free-form commenting; Quantitative scores; Structured review form | Non-anonymous | Public | Checked | Open identities; Open reports
Hypothesis | Self-nominated | Social annotation; Free-form commenting; Structured review form | Anonymous; Non-anonymous | Editor or editorial staff | Not included | Double anonymous; Open identities; Open interaction; Open participation; Open reports; Single anonymous
Interactive Public Peer Review | Editor, editorial staff or community | Structured review form; Free-form commenting | Anonymous; Non-anonymous | Editor or editorial staff | Checked | Open interaction; Open participation; Open reports
Peer Community In | Editor | Free-form commenting | Anonymous; Non-anonymous | Editor or editorial staff | Checked | Double anonymous; Open identities; Open reports; Single anonymous
Peer Library | Self-nominated | Annotations | Non-anonymous | Public | Not included | Open identities; Open interaction; Open participation; Open reports
Peeriodicals | Editor | Free-form commenting; Structured review form | Anonymous; Non-anonymous | n.a.* | n.a.* | Open interaction
PeerRef | Automated tool | Structured review form | Non-anonymous | Public | Displayed | Open identities; Open interaction; Open reports
Plaudit | Self-nominated | Annotations; Quantitative scores; Structured review form | Non-anonymous | Public | Not included | Open identities; Open interaction; Open participation; Open reports
preLights | Self-nominated | Summary; Reasons for selection; Significance and relevance; Free-form commenting | Non-anonymous | Public | Not included | Open identities
PREreview | Self-nominated | Free-form commenting; Structured review form; Score | Anonymous | Editor or editorial staff | Displayed | Open identities; Open interaction; Open reports; Single anonymous
PubPeer | Self-nominated | Free-form commenting | Anonymous; Non-anonymous | None or public | Not included | Open identities; Open interaction; Open participation; Open reports; Single anonymous
Qeios | Automated tool or community | Structured review form; Quantitative scores; Free-form commenting | Non-anonymous | Public | Displayed | Open identities; Open interaction; Open participation; Open reports
Rapid Reviews: COVID-19 | Editor, editorial staff or community | Structured review form; Strength of evidence scale | Anonymous; Non-anonymous | Editor or editorial staff | Checked | Open identities; Open reports
Red Team Market | Editor, editorial staff or community | Annotations; Free-form commenting; Structured review form | Anonymous; Non-anonymous | Editor or editorial staff | Not included | Open identities; Open interaction; Open reports; Single anonymous
Researchers.One | Self-nominated | Free-form commenting | Anonymous; Non-anonymous | Editor or editorial staff | Not included | Open interaction; Open participation; Open reports
Review Commons | Editor, editorial staff or community | Structured review form | Anonymous; Non-anonymous | Editor or editorial staff | Checked | Open reports
Scibase | Self-nominated | Free-form commenting; Structured review form; Scale or rating | Anonymous; Non-anonymous | Public | Not included | Open participation
Science Open Reviewed | Authors, editor, editorial staff or community | Review report; Free-form commenting | Anonymous; Non-anonymous | Public | Checked | Open identities
SciPost | Editor, editorial staff or community, or self-nominated | Free-form commenting; Structured review form | Anonymous | Editor or editorial staff | Checked | Open participation; Open reports
SciRate | Self-nominated | Free-form commenting; Quantitative scores | Anonymous; Non-anonymous | Public | Checked | Open identities; Open interaction; Open participation
Sinai Immunology Review Project | Self-nominated | Structured review form; Summary and grade | Non-anonymous | Editor or editorial staff or public | Not included | Open identities; Open reports

*Information not available or not identified

4. Preprint review services through the lens of four peer review schools of thought

We use the four peer review schools of thought recently proposed by some of us (Waltman et al., 2023) as a framework to discuss the developments around preprint review services. Waltman et al. (2023) proposed four schools of thought on innovation in peer review, each of which has a different emphasis on what the key problems are within current peer review systems and what the priorities should be for improving these systems. Below, we present the focus of each school of thought, as previously described by Waltman et al. (2023):

  • The Democracy & Transparency school focuses on making the evaluation of research more democratic and transparent. Concerned that peer review systems are often elitist and opaque, this school advocates broader participation in the review process to increase the accountability of editors and peer reviewers and enable information produced in peer review to be reused by others.
  • The Quality & Reproducibility school focuses on the role of peer review in evaluating and improving the quality and reproducibility of research. Based on concerns about inconsistent quality associated with peer review processes, this school is interested in innovations in peer review that improve the quality of review reports and published research. Another focal issue for this school is safeguarding research integrity and identifying scientific misconduct.
  • The Equity & Inclusion school focuses on making peer review processes more equitable and inclusive. This school is given impetus particularly by concerns about bias in peer review systems. It emphasises the need for a balanced representation of different groups of researchers in the peer review system to reduce or eliminate biases related to gender, geography, race, ethnicity, etc.
  • The Efficiency & Incentives school focuses on improving the efficiency of peer review processes and the incentives for peer reviewers. This school is concerned about the pressure on the peer review system, which makes it increasingly difficult to find peer reviewers, emphasising the need to increase the efficiency of peer review and to better incentivise reviewers.

We assessed the characteristics of the 23 preprint review services against the key issues raised by the four schools of thought, considering the stated goals and intentions of each service, their approach to peer review, and their commitment to openness and transparency. Based on those criteria, we identified how each service could be associated with each of the four schools of thought. This gives us insight into key issues, such as where the priorities of the different services lie, how they are managing apparent tensions between different aims, and to what extent they are delivering on their objectives. These designations are discussed in more detail in the following sub-sections.

4.1 Making peer review more democratic and transparent

Calls for more transparent and open peer review approaches are a common feature of discussions about peer review. Recently, Allen et al. (2022) presented the lack of transparency as one of the three main challenges of peer review, together with its exploitative nature and its slowness. Based on a wide-ranging conversation on X (the social media platform previously known as Twitter), they identified the need to open the “black box” of the current peer review system as the most constructive way to “fix peer review”. According to Fernandez-Llimos (2023), to minimise the peer review crisis we should start recognising peer reviewers by acknowledging their contribution in every single paper they reviewed, and we can do this most effectively in open peer review.

By assuming a commitment to openness and transparency, preprint review services are already arguably trying to “fix peer review”, challenging authors, reviewers, editors, and publishers to rethink and adjust their practices. Openness and transparency are common goals for all services included in our study, meaning that they all support the ideas of the Democracy & Transparency school. All services in our study provide open review reports and promote open interactions among authors, reviewers and users.

Still, Table 2 presents several variations in peer review approach, selection of reviewers, management of their identities, transparency and openness. It is evident that transparency and openness have different meanings and are applied differently by different services. Managing reviewers’ identities is perhaps one of the biggest challenges the services face. As highlighted by Chloros et al. (2023), open peer review involving open identities enables better recognition of the contribution reviewers make. However, for many researchers, revealing their identity may raise concerns, such as fear of retaliation, and this may result in them providing less critical feedback. In Section 5, we discuss how preprint review services deal with this tension and how they balance transparency and openness with the interests and concerns of reviewers.

4.2 Improving quality and reproducibility

The Quality & Reproducibility school emphasises the crucial role of peer review in improving and certifying the quality of scientific work. A focus on quality is evident in preprint review services that train reviewers, evaluate reviewers’ work, check for competing interests, and pay attention to ethics and integrity. As can be seen in Table 2, in addition to unstructured reports, preprint review services also use structured review forms, quantitative scores, and scales or ratings, showing the broad range of approaches taken by preprint review services to assess and improve the quality of scientific work. Many preprint review services have adopted ideas of the Quality & Reproducibility school, thereby showing their commitment to rigorous preprint review and positioning themselves as promoters of high-quality and reproducible research.

Faculty Opinions, for instance, states that the “caliber” of its reviewers and the “rigour” of the validation they provide ensure the quality of the research it recommends.

At Peer Community In, reviewers evaluate preprints in their thematic fields based on “rigorous peer review” and “conflicts of interests are carefully checked at each step of the process”. Plaudit considers that “publisher-independent endorsements” provided by “known and trusted academic community members provide credibility for valuable research”.

High-quality standards in peer review are often highlighted as essential for improving research quality and reproducibility (e.g., Chambers & Tzavella, 2022). As the attention given to the lack of reproducibility in research grows (e.g., Brendel, 2021; Errington et al., 2021), improving the reproducibility of research is increasingly seen as one of the responsibilities of peer review. This is reflected in the goals and intentions of some preprint review services and their policies and procedures. Examples of services for which this is the case are SciBase, Qeios and Peer Community In.

SciBase states that “science has a reproducibility problem” and argues that “only through honest public discussion can post-publication review become part of the scientific process, meaning that the reviewer’s identity is known to the public.” Reproducibility is one of the dimensions evaluated in SciBase’s review approach, which combines a report and an approval status (scale or rating). The overall rating provided by a review is a weighted average of the ratings of each of the individual dimensions, with the reproducibility dimension having the highest weight (25%). The other dimensions are logic/design, impact, transparency, clarity and versatility (15% each). SciBase also enables users to rate reviews themselves on a 1-5-star scale, with the intention of promoting transparency, rigour, accountability and reviewer recognition.

With the aim of improving quality and reproducibility, some services, like Qeios and Peer Community In, require authors to make all study data, digital materials, and computer code publicly available at the time of submission to the maximum extent permissible by the relevant legal or ethical restrictions. Scripts, simulation code, software parameters, etc., are prerequisites for submission. Also, reviewers are asked to check that authors provide sufficient details for the methods and analysis to allow replication by other researchers, considering statistical analyses and consistency of raw data and scripts, among other critical issues. As Peer Community In states, with these prerequisites, it aims to “establish a high-quality, free, public system for identifying high-quality preprints, after rigorous peer-review.” In addition to these examples, other services like Hypothesis and Peer Library make guidelines, recommendations or other support documents available to reviewers.

4.3 Making peer review processes more equitable and inclusive

Journal-based peer review is often perceived to be susceptible to disparities, inequities, and bias (e.g., Bancroft et al., 2022). The need to make peer review processes more equitable and inclusive is the focus of the Equity & Inclusion school. Supporters of this school often emphasise the need for a balanced representation of different groups of researchers in the peer review system to avoid biases related to gender, geography, race, ethnicity, etc.

An increasing amount of attention is paid to underrepresented groups in publication and peer review processes, aiming to contribute to diversity, equity, and inclusion (Royal Society of Chemistry, n.d.). Increasing diversity amongst editors, reviewers and authors is starting to be reflected in some editorial policies and practices. A key aim is to address biases in peer review against underrepresented groups. Concerns around peer review bias highlight the complexity of social interactions among actors involved in peer review and raise questions about the nature of the various forms of bias discussed in the literature (Lee et al., 2013).

An important example of a preprint review service explicitly addressing inequities and biases, with special attention to reviewers, is PREreview. By providing opportunities for traditionally marginalised research communities to get involved, train, connect, and be recognised for their contributions to scholarship, PREreview aims “to bring more equity and transparency to scholarly peer review.” At the same time, with the stated aim of helping researchers think deeply about how assumptions or biases may affect their assessment of manuscripts, PREreview developed the Bias Reflection Guide (Foster et al., 2021). This guide, which reflects the perspective of the Equity & Inclusion school, is provided to help students and researchers make their approach to peer review more inclusive and less biased.

PREreview enables reviewers to publish reviews with their public names (connected to ORCIDs) or pseudonyms provided by the platform. This is intended to allow reviewers to critique the work they review without fearing negative consequences, particularly if the author is more senior than the reviewer. In this way, ‘PREreviewers’, particularly those belonging to vulnerable communities, may “contribute to open preprint review without fearing retribution or judgment that may occur when their full name is attached to the feedback, whilst retaining an element of accountability”. From the point of view of reviewers, protecting their identities can be expected to reduce the likelihood of bias based on factors such as gender, ethnicity, or institutional affiliation. Arguably, it protects reviewers against retaliation and encourages rigorous and honest feedback.

Preprint review services take several different approaches to handle issues related to the identity of reviewers. Even for services that invite reviewers to embrace “full transparency (i.e., critics’ names and their criticisms are made publicly available)”, as Red Team Marked does, revealing one’s identity is not always mandatory (Table 2). Reviewers’ identities may be protected if they wish. Some services enable reviewers to choose whether they want to sign their review reports, others protect the identity of reviewers with a nickname or a pseudonym. Another approach is taken by Crowd preprint review, which aggregates the comments of different reviewers in a final synthesised review that is “publicly posted, without linking specific comments to the commenter.” Rapid Reviews: COVID-19 (RR:C19) emphasises global diversity. It works with a “global team, including board members and reviewers from all regions”, and reviewers can be anonymised upon request.

Another approach promoted by the Equity & Inclusion school is double-anonymous peer review, in which not only reviewers but also authors are anonymous. This approach is hard to implement in the context of preprint review. Nevertheless, some services make an effort to facilitate double-anonymous peer review in special situations. We will discuss this in more detail in Section 5.2.

4.4 Improving peer review efficiency and reviewer incentives

Peer review is often seen as overloaded with a growing burden on reviewers, editors, and other actors in the system, contributing to substantial delays in publication (e.g., Nguyen et al., 2015; Okuzaki et al., 2019), and there is a lot of discussion about faster and more efficient approaches to peer review. Reducing inefficiencies in peer review constitutes a big challenge for journals, where the expectation typically is that a separate review process is carried out for each submission to each journal, including submissions previously rejected by other journals. By providing journal-independent portable review, preprint review services may reduce re-reviewing at multiple journals and increase the efficiency of peer review. On the other hand, depending on the extent to which journals are willing to adjust their processes and consider re-using preprint reviews, we might ask whether preprint review initiatives are, in fact, adding to an already overburdened system by creating even more work. In Section 6, we address these questions in more detail.

With regard to portable peer review, ten years ago, Swoger (2013) argued that rejection takes time and energy from authors, reviewers and editors, and that portable peer review reduces re-reviewing at multiple journals, increasing the efficiency of the process. Swoger pointed out the role that new “portable peer review” services, independent of specific journals, could play in cutting down on redundant work by separating the review process from the publication process.

This illustrates one of the key issues emphasised by the Efficiency & Incentives school, which focuses on improving the efficiency of peer review processes and the incentives for peer reviewers. This school is concerned about the pressure on the peer review system, in which it is increasingly difficult to find peer reviewers. Approaches to make review processes more efficient and reduce re-reviewing at multiple journals are facilitated by various preprint review services, like Rapid Reviews: COVID-19, PeerRef and Review Commons, together with initiatives to incentivise researchers by making their work more visible and recognised. Nevertheless, according to the experience reported by the Rapid Reviews: COVID-19 service, there is so far no strong evidence that journals want to take its reviews into account. Despite efforts to balance rapidity and rigour and to make peer review more efficient, not all journals are receptive to establishing formal relationships with preprint review services in order to have access to a feed of reviews that could supplement their review process.

Preprint peer review services may also address problems arising from the “reputation economy” for reviewers (Swoger, 2013). CrowdPeer, for instance, allowed reviewers “to build their reputations within the community and benefit professionally.” By implementing a reviewer evaluation system, in which the quality of a reviewer’s work is evaluated, CrowdPeer brought together key issues from the Democracy & Transparency school (accountability) and the Efficiency & Incentives school (recognition). Another example is preLights. By providing a “platform where early-career researchers can practise their scientific writing and reviewing skills,” this service aims to raise the profiles of early-career researchers as “trusted preprint selectors and commentators.” preLights provides recommendation letters to support early-career researchers.

The attribution of a DOI (Digital Object Identifier) to public review reports makes the reviewer’s work fully citable and claimable as a scientific output, offering another way to help reviewers gain recognition for their efforts. It is implemented by several services, like Rapid Reviews: COVID-19, Interactive Public Peer Review, and preLights. In addition, Rapid Reviews: COVID-19 and Plaudit link researchers, identified by their ORCID, to the research they have evaluated, identified by a DOI.

For Science Open Reviewed, a “paid service [for reviewers] combined with published reviewer acknowledgement” may promote higher quality reviews, provide “fair and motivating compensation” for professional peer review services and “minimise reviewer bias and promote greater reviewer accountability.”

5. Managing tensions between schools of thought: Anonymisation of reviewers and authors

In some cases, key priorities of different peer review schools of thought can be achieved alongside each other by the same preprint review service. For example, the Efficiency & Incentives school and the Democracy & Transparency school seem to complement each other in various ways. It may therefore be relatively easy for a preprint review service to embrace the ideas of both of these schools. This is, for instance, illustrated by CrowdPeer, although interestingly, this service is no longer operational. By providing “open review of preprints and engagement of a diverse group of reviewers,” it aligned with the Democracy & Transparency school, making the evaluation of research more democratic and transparent. At the same time, CrowdPeer also aimed to improve the efficiency of review processes by providing “a universal structure for reviews to standardise the review process” and providing an “educational tool for reviewers.” In that way, CrowdPeer aimed to reduce the pressure on the review system, in line with the Efficiency & Incentives school.

In other cases, however, there are tensions between different peer review schools of thought, reflecting different perspectives on how peer review can best be organised. As we discuss in the next sub-sections, managing the identities of reviewers and authors is one of the biggest challenges preprint review services face.

5.1 Anonymisation of reviewers

The Equity & Inclusion school promotes anonymity of reviewers, as we have seen. It sees anonymity as a way to protect reviewers against possible negative consequences of critical reviews. Especially researchers who are in an early career stage or who otherwise find themselves in a vulnerable position may fear such negative consequences. Anonymity makes it easier for these researchers to perform peer review and therefore arguably contributes to more inclusive approaches to peer review.

The Quality & Reproducibility school may also be supportive of anonymity of reviewers, based on the rationale that anonymity enables reviewers to be more frank and therefore results in higher-quality peer review. However, anonymity of reviewers is in tension with the ideas of the Democracy & Transparency school. This school prefers to organise peer review as an open dialogue in which authors and reviewers participate on an equal basis, with each party accountable for their contributions. Organising peer review in this way is difficult or even impossible when reviewers are anonymous.

As shown in Table 2, with respect to transparency and openness, preprint review services provide different options. To balance transparency and protect reviewers’ interests, some services (13) enable reviewers to decide whether to sign their reviews, becoming anonymous or non-anonymous to the public. Others (8) require a non-anonymous review. In addition to the option of signing or not signing reviews, another solution already mentioned is to protect reviewers under a nickname or a pseudonym. This approach is a way in which preprint review services manage tensions between peer review schools of thought, in particular between the Democracy & Transparency and the Equity & Inclusion schools. PREreview is explicit that it aims to bring “more equity and transparency to scholarly peer review by supporting and empowering communities of researchers, particularly those at early stages of their career”, but in doing so, it arguably also limits transparency. For Hypothesis, enabling reviewers to use a pseudonym promotes “transparency and credibility without public identity,” although it is evident that they are managing a tension between these two aims. In relation to reviewers’ identity, these services attempt to balance the idea of transparency and the accountability it brings with protections for reviewers where they might experience negative consequences if their identities were disclosed.

Anonymity of reviewers also poses a challenge for the Efficiency & Incentives school. To incentivise reviewers, this school emphasises the importance of giving recognition to reviewers. However, when reviewers are anonymous, it is difficult to publicly recognise them for the efforts they make, other than to say a particular reviewer has reviewed for a particular service without any more detail than that. In particular, associating reviewers with the text of the review, and recognising their contribution at that level of detail, becomes difficult. In addition, anonymity of reviewers may also complicate the reuse of reviews, making it difficult for editors to recognise the competency and credibility of reviewers, thereby undermining the desire of the Efficiency & Incentives school to make peer review more efficient.

This highlights how complex and challenging it can be for a preprint review service to manage tensions between different priorities, like transparency, rigour, recognition, and equity and inclusion, with tensions related to anonymity in peer review being among the most challenging ones.

5.2 Anonymisation of authors

For the Democracy & Transparency and the Efficiency & Incentives schools, performing peer review after publication, not before, offers an important way to enhance the transparency and efficiency of peer review. Hence, these schools take a critical stance towards double-anonymous approaches to peer review, in which not only reviewers but also authors are anonymous. Anonymity of authors is incompatible with the idea of performing peer review after publication. On the other hand, the Equity & Inclusion school supports double anonymity, arguing that anonymity of authors reduces bias in peer review and therefore contributes to making peer review more equitable and inclusive. As already pointed out, the Quality & Reproducibility school may perceive reducing bias as a way to improve the quality of peer review and may therefore also support double anonymity.

In our set of preprint review services, there are two, Peer Community In and Hypothesis, that state they offer double-anonymous peer review (Table 2). For Peer Community In, double-anonymous peer review is not a regular practice but may happen exceptionally by request to fulfil the author’s needs. To submit an article anonymously to Peer Community In, the authors must use a private web-based interface (e.g., Google Docs, Dropbox or GitHub). Data, scripts and codes must also be made available through a private web-based interface. In this way, the names of the authors do not need to be disclosed to the reviewers evaluating the article, making it possible to have a double-anonymous evaluation. If the article is eventually recommended by the recommender overseeing the peer review process, the authors must deposit a final, non-anonymised version of their article in an open archive.

Although Hypothesis also claims to facilitate double-anonymous peer review, it is not clear to us how this is actually done. Double anonymity sits uneasily with preprint review, since preprints are normally posted publicly under their authors’ names. Still, although double-anonymous review is less transparent, it may be argued to be more inclusive because it attempts to avoid biases related to gender, geography, race, ethnicity, etc. This is clearly a compromise – an attempt to manage the tension between the drive for transparency, characteristic of preprinting, and that of inclusiveness. By offering double-anonymous evaluation of an article, Peer Community In and Hypothesis offer a peer review process that is very similar to traditional closed journal-based peer review, leading us to question whether this should be referred to as preprint review at all.

6. How do preprint review services fit into the publishing landscape?

Preprint review services typically position themselves as complementary to journal-based peer review. The idea of being a “complementary service with no intention of competitiveness” (Peer Community In) is present in the stated aims of almost all preprint review services, which, for instance, argue that they enable researchers to “innovate and explore new approaches to scientific dissemination, in parallel to the traditional publishing industry” (Peeriodicals). PeerRef states that its aim to “provide researchers with greater choice in how their research is shared and evaluated, and eliminate the need for repeated peer review in successive journals” is a key way in which this complementarity works. Some services implement their complementary role in the system by working directly with affiliate, friendly or partner journals, including through the emerging overlay journal model (Rousi & Laakso, 2022), such as the journals hosted by Episciences. Other services support journals in more indirect ways, for instance by helping editors find “new research they may not have otherwise discovered” (Rapid Reviews: COVID-19) and by reducing re-reviewing. In this way, preprint review services arguably help to organise peer review in more efficient ways.

However, the success of preprint review initiatives as a complementary service to journal-based peer review depends on the extent to which journals are going to adjust their processes and are going to make use of preprint reviews. According to Saderi and Greaves (2021), editors may be open to these new developments only under certain conditions. To better understand if and how preprint reviews may help make journal-based peer review processes faster and more efficient, Saderi and Greaves asked editors of the journals involved in the COVID-19 Rapid Review (OASPA, 2020) initiative to fill out a short survey. Based on a small number of responses, they concluded that any progress towards an integrated workflow between preprint reviews and journal-based peer review would take a coordinated effort by different stakeholders and that this effort must focus on building mutual trust: on the one hand, the trust by editors in preprint reviews and, on the other hand, the trust by authors and reviewers that their contributions will be valued and recognised.

If journals are not interested in making use of preprint reviews, preprint review services arguably leave themselves open to the criticism that they put even more pressure on an already overburdened system by adding another layer of review on top of journal-based peer review. Rather than developing ways to improve publishing and peer review workflows, the services might be seen as unnecessarily adding further complexity to the system. The viability of preprint review services may depend on how they address this criticism – for example by arguing that they offer alternative publication options to authors and help to turn peer review into a more rewarding experience for authors and reviewers.

Nevertheless, while almost all preprint review services we examined ostensibly aim to supplement the current journal-based publishing system, some also identify the possibility of more radical change, aiming for more ambitious reforms of scientific publishing. They consider, for instance, that “the quality of published work must stand on its own, without the crutch of impact factors, journal prestige, ‘likes’, ‘thumbs up’, or the artificial stamp of approval signalled by the label ‘peer review’” (Researchers.One), or aim “to relieve the pressure on authors to publish their work in ‘top-tier’, but often paywalled journals” (Plaudit). Despite some resistance from editors to partnering with preprint review services, some journals are already experimenting with reviewing preprints themselves or with partnering with preprint review services. This raises a key question: Should preprint review services just be seen as a complement to journal-based publishing, or could they potentially offer an alternative to it?

At present, preprint review services seem to opt for a cautious approach, avoiding a radically disruptive position that could compromise their future development. However, after this first stage of experimentation, the increasing maturity of preprint review services and the involvement of and recognition from the research community may possibly enable these services to position themselves as an alternative, rather than a complement, to journal-based publishing. As discussed by Kramer et al. (2022), the dissemination function of journals may be taken over by preprint servers (although the term ‘preprint’ may no longer be appropriate), while preprint review services may take over the evaluation function of journals. In the most extreme outcome, conventional journals may cease to exist, and scientific publishing may take place entirely on preprint servers and preprint review platforms.

Edmunds (2022), editor-in-chief at GigaScience Press, argues that, as standalone movements, the various models and experiments developed to improve the journal system “were a harder sell, but now, working together, they interact and support each other synergistically”. Like Edmunds, we believe that preprint review services offer important opportunities to improve the publishing process by improving transparency, accountability, efficiency and speed. Moreover, Edmunds highlights that “research culture has also been changing, with younger researchers seeing transparency and openness as a norm, and funders and publishers endorsing and promoting such efforts”. Edmunds is probably right that “time seems to have finally arrived” for preprinting combined with open and portable peer review.

7. Conclusions

To provide a systematic understanding of the main characteristics of preprint review services, we conducted a descriptive study of 23 services. We examined how these services have been established to manage preprint review and how they position themselves within the broader publishing landscape, with the aim of addressing the following research questions:

RQ1 – How do preprint review services differ from, and what do they have in common with, traditional peer review?

RQ2 – How are preprint review services linked with the four schools of thought on innovation in peer review?

RQ3 – How do preprint review services manage connections between the four peer review schools of thought when adopting ideas from multiple schools?

RQ4 – How can preprint review supplement journal-based peer review or offer an alternative to it?

In addressing RQ1, our analysis identified both differences and commonalities between preprint review and traditional peer review. Preprint review services have the potential to address a number of key issues in scholarly communication and peer review. They tend to diverge from traditional models by emphasising openness, transparency, and flexibility, while maintaining the shared objective of ensuring research quality and credibility. By providing quality control, preprint review services can enrich the value of preprints, reinforce confidence in their use, and make the evaluation of research more efficient and more open and transparent. In addition, preprint review services may improve accountability and recognition of reviewers, supplement journal-based peer review, and reduce the need for re-submission and review at multiple journals.

In relation to RQ2, we used the four peer review schools of thought framework (Waltman et al., 2023) to explore how innovation and experimentation promoted by preprint review services can address the challenges of traditional journal-based peer review and provide possible solutions and future directions. Our analysis gave us insight into key issues, such as where the priorities of the different services lie and how they manage tensions between schools, reflecting different perspectives on how peer review can best be organised.

In addressing RQ3, we found that many services draw upon ideas from multiple peer review schools of thought and need to manage tensions between them. The most important tension that we identified relates to anonymisation of reviewers and authors. In line with the ideas of the Democracy & Transparency school, preprint review services promote more open forms of peer review in which authors and reviewers participate on a more equal basis. However, from the perspective of the Equity & Inclusion school, this raises concerns. To make peer review processes more equitable and inclusive, this school emphasises the importance of enabling anonymisation of reviewers and possibly also authors, which is in tension with the focus on openness and transparency of preprint review services.

The scholarly communication landscape seems to be moving toward a mixed system in which preprint servers, preprint review services and journals operate mostly in complementary ways. In response to RQ4, our findings show that preprint review services have the potential both to supplement and to provide an alternative to journal-based peer review. Preprint review services currently seem to avoid a radically disruptive position. Still, after a period of experimentation, their increasing maturity may enable these services to position themselves as an alternative, rather than a complement, to traditional journal-based publishing. In the longer term, preprint review services may, therefore, disrupt the scholarly communication landscape in a more radical way.

Despite the potential to either complement existing scholarly communication services or even replace them, it is apparent that at present many preprint review services face problems of sustainability. During the period of our analysis, two services, CrowdPeer and Red Team Marked, were taken out of service. Whilst the reasons for their closure are not entirely clear, it is obvious that many of the services we studied are built with small budgets and have uncertain funding streams. They typically rely on the enthusiasm and commitment of volunteers. Achieving financial stability will be a challenge for many of these services, particularly if they wish to maintain their current independence from large commercial interests.

Acknowledgements

We are grateful to the managers of preprint review services that kindly contributed to our work by clarifying and confirming some details and adding relevant additional information. We thank Jessica Polka from Crowd preprint review, Raphaël Tournoy from Episciences, Natascha Töpfer from Interactive Public Peer Review, Denis Bourguet from Peer Community In, Mitar from Peer Library, Elliott Lumb from PeerRef, Reinier Prosée from preLights, Daniela Saderi from PREreview, Gabriele Marinello from Qeios, Nick Lindsay from Rapid Reviews: COVID-19, Ryan Martin from Researchers.One and Aram Harrow from SciRate. We also extend our gratitude to Stephen Gabrielson, Dibyendu Roy Chowdhury, Ashley Farley, and Gary McDowell of the ASAPbio Meta-Research Crowd for their insightful comments on an earlier version of this paper (https://prereview.org/reviews/10210714).

Author contributions

Conceptualization: SOH, NR, SP, LW; Data curation: SOH; Formal analysis: SOH; Funding acquisition: SP, LW; Investigation: SOH; Methodology: SOH, SP, LW; Project administration: SOH, SP, LW; Supervision: SP, LW; Validation: SOH, NR, SP, LW; Visualization: SOH; Writing – original draft: SOH; Writing – review & editing: SOH, NR, SP, LW

Competing interests

Stephen Pinfield and Ludo Waltman are involved in MetaROR (MetaResearch Open Review), a platform for open peer review of preprints in the field of metaresearch.

Funding information

Stephen Pinfield and Ludo Waltman were supported by Wellcome [221297/Z/20/Z] as part of its core funding of the Research on Research Institute (RoRI).

Data availability

The data that support the findings of this study are openly available on figshare at https://doi.org/10.6084/m9.figshare.24307312.v3.

References

ASAPbio. (n.d.). ReimagineReview – A registry of platforms and experiments innovating around peer review. ASAPbio. https://reimaginereview.asapbio.org/

Avissar-Whiting, M., Belliard, F., Bertozzi, S. M., Brand, A., Brown, K., Clément-Stoneham, G., … & Williams, M. (2024). Recommendations for accelerating open preprint peer review to improve the culture of science. PLoS Biology, 22(2), e3002502. https://doi.org/10.1371/journal.pbio.3002502

Bancroft, S. F., Ryoo, K., & Miles, M. (2022). Promoting equity in the peer review process of journal publication. Science Education, 106(5), 1232–1248. https://doi.org/10.1002/sce.21733

Berg, J. M., Bhalla, N., Bourne, P. E., Chalfie, M., Drubin, D. G., Fraser, J. S., Greider, W., Hendricks, M., Jones, C., Kiley, R., King, S., Kirschner, M. W., Krumholz, M., Lehmann, R., Leptin, M., Pulverer, B., Rosenzweig, B., Spiro, J. E., Stebbins, M., … & Wolberger, C. (2016). Preprints for the life sciences. Science, 352(6288), 899–901. https://doi.org/10.1126/science.aaf9133

Blatch-Jones, A. J., Recio Saucedo, A., & Giddins, B. (2023). The use and acceptability of preprints in health and social care settings: A scoping review. PLoS ONE, 18(9), e0291627. https://doi.org/10.1371/journal.pone.0291627

Brendel, K. (2021). Time to assume that health research is fraudulent until proven otherwise? BMJ. https://blogs.bmj.com/bmj/2021/07/05/time-to-assume-that-health-research-is-fraudulent-until-proved-otherwise/

Chambers, C. D., & Tzavella, L. (2022). The past, present and future of Registered Reports. Nature Human Behaviour, 6(1), 29–42. https://doi.org/10.1038/s41562-021-01193-7

Chloros, G. D., Konstantinidis, C. I., Vasilopoulou, A., & Giannoudis, P. V. (2023). Peer review practices in academic medicine: How the example of orthopaedic surgery may help shift the paradigm? International Orthopaedics, 47(5), 1137–1145. https://doi.org/10.1007/s00264-023-05729-6

Corker, K. S., Waltman, L., & Coates, J. A. (2024). Understanding the Publish-Review-Curate (PRC) model of scholarly communication. Zenodo. https://doi.org/10.5281/zenodo.13909969

Edmunds, S. (2022). Preprints and open peer review come of age. Research Information. https://www.researchinformation.info/analysis-opinion/preprints-and-open-peer-review-come-age

Errington, T. M., Mathur, M., Soderberg, C. K., Denis, A., Perfito, N., Iorns, E., & Nosek, B. A. (2021). Investigating the replicability of preclinical cancer biology. eLife, 10, e71601. https://doi.org/10.7554/eLife.71601

Fernandez-Llimos, F. (2023). Comment on the article: “Peer review practices in academic medicine: how the example of orthopaedic surgery may help shift the paradigm?” International Orthopaedics, 47(5), 1391–1392. https://doi.org/10.1007/s00264-023-05768-z

Foster, A., Hindle, S., Murphy, K. M., & Saderi, D. (2021). Open reviewers bias reflection guide. Zenodo. https://doi.org/10.5281/zenodo.5484052

Fraser, N., Brierley, L., Dey, G., Polka, J. K., Pálfy, M., Nanni, F., & Coates, J. A. (2021). The evolving role of preprints in the dissemination of COVID-19 research and their impact on the science communication landscape. PLOS Biology, 19(4), e3000959. https://doi.org/10.1371/journal.pbio.3000959

Henriques, S. O., Rzayeva, N., Pinfield, S., & Waltman, L. (2023). Data Set – Preprint review services: Disrupting the scholarly communication landscape? figshare. Dataset. https://doi.org/10.6084/m9.figshare.24307312.v3

Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. Journal of the American Society for Information Science and Technology, 64(1), 2–17. https://doi.org/10.1002/asi.22784

Lutz, J. F., Sondervan, J., Edig, X. van, Freeman, A., Kramer, B., & Rosenkrantz, C. H. (2023). Knowledge Exchange analysis report on alternative publishing platforms. Alternative Publishing Platforms. https://doi.org/10.21428/996e2e37.3ebdc864

Nguyen, V. M., Haddaway, N. R., Gutowsky, L. F. G., Wilson, A. D. M., Gallagher, A. J., Donaldson, M. R., Hammerschlag, N., & Cooke, S. J. (2015). How long is too long in contemporary peer review? Perspectives from authors publishing in conservation biology journals. PLOS ONE, 10(8), e0132557. https://doi.org/10.1371/journal.pone.0132557

Ni, R., & Waltman, L. (2024). To preprint or not to preprint: A global researcher survey. Journal of the Association for Information Science and Technology, 75(6), 749–766. https://doi.org/10.1002/asi.24880

OASPA. (2020). COVID-19 Publishers open letter of intent – rapid review. OASPA. https://oaspa.org/covid-19-publishers-open-letter-of-intent-rapid-review/

Okuzaki, Y., Nakamura, S., & Nakaoka, M. (2019). Toward more rapid and efficient peer review: A case study on publication in Ecological Research. Ecological Research, 34(5), 563–574. https://doi.org/10.1111/1440-1703.12037

Reardon, J., Crawford, J., Allen, K.-A., & Walsh, L. (2022, July 25). The peer review system is broken. We asked academics how to fix it. The Conversation. https://theconversation.com/the-peer-review-system-is-broken-we-asked-academics-how-to-fix-it-187034

Ross-Hellauer, T. (2017). What is open peer review? A systematic review. F1000Research, 6, 588. https://doi.org/10.12688/f1000research.11369.2

Rousi, A. M., & Laakso, M. (2022). Overlay journals: A study of the current landscape. Journal of Librarianship and Information Science, 56(1), 15–28. https://doi.org/10.1177/09610006221125208

Royal Society of Chemistry. (n.d.). Joint commitment for action on inclusion and diversity in publishing. Royal Society of Chemistry. https://www.rsc.org/policy-evidence-campaigns/inclusion-diversity/joint-commitment-for-action-inclusion-and-diversity-in-publishing/

RR:C19. (n.d.). Rapid Reviews Infectious Diseases. Retrieved 19 July 2024, from https://rrid.mitpress.mit.edu/rrc19

Rzayeva, N., Pinfield, S., & Waltman, L. (2025). Adoption of preprinting across scientific disciplines and geographical regions (1991-2023). SocArXiv. https://doi.org/10.31235/osf.io/xdwc4_v2

Saderi, D., & Greaves, S. (2021). Using preprint reviews to drive journal peer review. ASAPbio. https://asapbio.org/using-preprint-reviews-to-drive-journal-peer-review

Swoger, B. (2013). Can you take it with you when you go? Portable peer review. Scientific American Blog Network. https://blogs.scientificamerican.com/information-culture/can-you-take-it-with-you-when-you-go-portable-peer-review/

Waltman, L., Kaltenbrunner, W., Pinfield, S., & Woods, H. B. (2023). How to improve scientific peer review: Four schools of thought. Learned Publishing, 36(3), 334–347. https://doi.org/10.1002/leap.1544

Editors

Kathryn Zeiler
Editor-in-Chief

Kathryn Zeiler
Handling Editor

Editorial Assessment

by Kathryn Zeiler

DOI: 10.70744/MetaROR.300.1.ea

In the previous round of review, the reviewers and editor agreed that the original manuscript represented a valuable contribution, presenting a descriptive analysis of preprint review services with a focus on the services’ characteristics and differences in how they manage preprint review. The manuscript has been improved to address the limitations noted in that round. One reviewer, however, continues to point to the lack of evidence for the authors’ claim that the discipline categorisation is “commonly accepted.”

Competing interests: Kathryn Zeiler is co-Editor-in-Chief of MetaROR. Authors Ludo Waltman and Stephen Pinfield are also MetaROR editorial members.

Peer Review 1

Pen-Yuan Hsing

DOI: 10.70744/MetaROR.300.1.rv1

Many thanks to the authors for sharing a revised version of this paper, engaging with reviewer responses, and the various updates. I believe it is an important contribution to the discourse around preprints and preprint review, serving as a foundation for future metaresearch on this topic and informing advocacy. Thanks again for the opportunity to engage with this excellent paper.

Looking to the future, I re-iterate the importance of being sensitive in our language to disciplinary and epistemic diversity. I disagree with the claim that the “five broad areas” the authors used to categorise disciplines are “commonly accepted”. Where is the evidence for this? And even if the categorisation is common, that is not an adequate justification in itself. For example, there will be many researchers in the humanities who will not agree that they belong in a “scientific” discipline. Some will be deeply offended, and justifiably so. These considerations are especially critical in metaresearch, as we often make normative statements about the nature of research and must be very mindful of our implicit biases. Words matter!

Competing interests: None.

Peer Review 2

Ross Mounce

DOI: 10.70744/MetaROR.300.1.rv2

I am really pleased with how the authors have responded to my review and the review provided by Pen-Yuan Hsing.

Whilst I note the authors opted not to make one of my suggested stylistic changes to a table, it was just a suggestion of mine, not a strongly held position. It’s good that authors respectfully hold their own against the opinions of reviewers sometimes 🙂
The description of the work performed in this research has been improved, and as a reviewer I am satisfied.
I thank the MetaROR team for giving me, as a reviewer, a chance to respond to the changes in the manuscript and the author responses to the initial round of reviews. Whilst there is not much to say in this second round of review, I appreciate the thoroughness in offering it.

Competing interests: I am Director of Open Access at Arcadia.
