Published at MetaROR
September 23, 2025
Peer review at the crossroads
1. Centre for Science and Technology Studies, Leiden University, Leiden (Netherlands)
2. Department of Probability Theory and Cybersecurity, RUDN University, Moscow (Russian Federation)
3. Institute for the Study of Science, Russian Academy of Sciences, Moscow (Russian Federation)
Originally published on August 3, 2025 at:
Abstract
Peer review has long been regarded as a cornerstone of scholarly communication, ensuring the quality and credibility of published research. Although academic journals trace their origins back three centuries, the procedures for evaluating submissions, particularly peer review, have undergone continuous evolution. Peer review's formal institutionalization in the mid-20th century represents a significant, yet natural, phase in this ongoing transformation of scholarly communication. By the early 21st century, the opinion had emerged that the conventional model of peer review faces systemic challenges, including inefficiency, bias, and institutional inertia. This study aims to synthesize the evolution, practices, and outcomes of both conventional and innovative peer review models in scholarly publishing. Through a mixed-methods approach combining an interpretative literature review and process modeling (Business Process Model and Notation, BPMN), it identifies four frameworks: pre-publication peer review, registered reports, modular publishing, and the Publish-Review-Curate (PRC) model. While the PRC model, which integrates preprints with post-publication review, demonstrates advantages in transparency and accessibility, no single approach emerges as universally ideal. The choice of model depends on disciplinary context, resource availability, and institutional priorities. The analysis underscores the need for adaptable platforms that enable hybrid workflows, balancing rigor with inclusivity. Future research must address empirical gaps in evaluating these innovations, particularly their long-term impact on equity and epistemic norms.
This paper was previously peer reviewed at the MetaROR platform:
Key adjustments in this version:
- Streamlined historical framing to serve as background context only
- Added explanation explicitly justifying the interpretative approach
- Minor edits
1. Introduction
Peer review in scholarly communication refers to feedback provided by researchers (peers) on a specific study. It is a defining feature of academic journals, distinguishing them from popular or professional publications. Traditionally, peer review is meant to ensure the quality of scientific research, increasing the level of trust within the academic community and among funders (G. D. Smith & Jackson, 2022). Peer review helps to identify and correct errors in scientific studies that may lead to flawed conclusions and misguided decisions (Kelly et al., 2014). While authors benefit from constructive feedback, the process of accepting feedback can be challenging due to several interrelated factors highlighted by Watling et al. (2023):
- Emotional challenges,
- Structural and process-related issues,
- Cultural and professional pressures,
- Experience and power dynamics,
- Systemic inequities.
Some scholars further highlight that peer review can enhance an article’s readability and broader scholarly appeal, irrespective of its initial quality1. Additionally, it is critical to acknowledge that not all researchers have access to collegial discussions about their work, rendering peer review an indispensable channel for academic dialogue and feedback.
Reviewers typically evaluate a wide range of criteria such as those outlined in a UK House of Commons report (Peer Review in Scientific Publications: Eighth Report of Session 2010-12. Vol. 1, 2011):
- Study design and methodological rigor,
- Soundness of results,
- Transparency of data used in the study,
- Interpretation of results,
- Whether study objectives are met,
- Completeness of the study (preliminary vs. final),
- Scientific novelty and significance,
- Ethical compliance.
International journals often require evaluation of language proficiency. However, criteria vary across disciplines and journals.
By the early 21st century, peer review was labelled as “broken” by some critics (McCook, 2006). The main problem noted by McCook is the increasing number of manuscripts and the burden on reviewers. This, however, only scratches the surface. Allen et al. (2022) highlighted the issue of the “black box”: while the anonymity of traditional peer review aims to uphold integrity, it may also suppress discussion, perpetuate biases, and entrench dominant paradigms, stifling innovative ideas. As Academician L.I. Abalkin noted, “no one has the right to usurp the truth” (Sukharev, 2020, p. 44). Without reform, science risks stagnation or a shift towards alternative communication channels.
R. Smith (2006) linked peer review to democracy: “a system full of problems but the least worst we have” (p. 178). Yet can we consider peer review as a uniform concept, given the variety of existing models? Which model of peer review best fulfills the functions of scholarly communication?
This study aims at a critical evaluation of the effectiveness, challenges, and implications of conventional and emerging peer review models, such as the Publish-Review-Curate framework and deconstructed publication approaches (registered reports and modular publishing), in fostering transparent, efficient, and equitable scholarly communication. The analysis covers historical developments and contemporary innovations to identify trends shaping peer review practices. Section 2 briefly outlines the evolving functions of scholarly communication and peer review’s role in this framework. Section 3 describes the methodology of the study, followed by an analysis of peer review development and its current crisis. Subsequent sections explore solutions to the crisis and peer review’s adaptation to global publishing changes. While focused on scientific articles, the findings of the study can be extended to books and conference proceedings2. At the same time, review for other purposes, e.g., the evaluation of grant applications, is a topic for a separate discussion.
2. Functions of Scholarly Communication
Kling and McKim (1999) outlined three building blocks of scholarly publishing as a form of communication: publicity, accessibility, and trustworthiness. While publicity and accessibility relate to the dissemination of research outputs, trustworthiness ensures credibility through peer review, journal reputation, and sponsorship. Thus, scholarly communication is traditionally believed to serve multiple functions, summarized in Table 1.
Table 1. Functions of scholarly communication.

| Function | Brief description | Reference |
| --- | --- | --- |
| Registration | Establishes priority and ownership of ideas | (Bohlin, 2004; Roosendaal et al., 2001) |
| Archiving | Ensures long-term preservation and accessibility of knowledge | (Bohlin, 2004; Roosendaal et al., 2001) |
| Awareness/Distribution | Disseminates research to relevant audiences | (Kling & McKim, 1999; Roosendaal et al., 2001) |
| Certification/Quality Control | Validates research outputs mainly through peer review | (Bohlin, 2004; Roosendaal et al., 2001) |
While traditional models emphasize peer review as indispensable for certification (Kling & McKim, 1999), Bohlin (2004) highlighted fields like physics where preprint servers (e.g., arXiv) reduced reliance on journals for quality control. Tensions arise as rapid dissemination via preprint servers, while enhancing accessibility, may also undermine the traditional quality control function.
At the current stage, the link between scientific communication and peer review has become more complex, as peer review adopts both pre- and post-publication validation roles (Chtena et al., 2025). Preprint servers disrupt academic publishing by separating dissemination (via preprints) from certification, which may now occur post-publication through overlay journals, third-party review platforms, or journals integrating preprints into their workflows. Initiatives such as “Publish-Review-Curate”, discussed in subsection 5.2, challenge the traditional gatekeeping function of academic journals.
The open access movement further redistributed functions of scholarly communication, putting an emphasis on free access over traditional subscription-based trust. Finally, the traditional roles of scholarly communication are no longer seen as static functions but as interdependent processes that evolve with digital innovation (Baffy et al., 2020).
3. Data and Methods
The first task of this study is to trace the evolution of peer review over time. While our analysis of the evolution of peer review incorporates elements of historical analysis, it does not fully qualify as such because primary sources were not directly utilized. Instead, it functions as an interpretative literature review, and one that is intentionally concise, as a comprehensive history of peer review falls outside the scope of this research.
The literature review deliberately employs an interpretative, rather than systematic, methodology. The core objective necessitates synthesizing conceptual developments and illustrative models of peer review, not an exhaustive cataloguing of every publication. Consequently, sources were identified through targeted searches across major academic platforms and repositories, including Semantic Scholar, Google Scholar, and pertinent grey literature channels. The selection process was guided by inclusion criteria: works were considered only if they primarily described, analyzed, or proposed typologies of peer review models. Studies where peer review was merely tangential or incidental were excluded.
For modeling and describing the processes that constitute various peer review models, BPMN (Business Process Model and Notation) was employed. BPMN is a widely adopted standard for business process modeling, offering a graphical notation that is easy to use (Völzer, 2010). It excels at representing the behavioral aspects of processes, enabling its application across diverse domains (Perry, 2006). However, modeling other dimensions of processes may require supplementary methods (e.g., the narrative modeling used within this study). The graphical process notation was developed using the open-source software Draw.io3.
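To make the notation concrete, the sketch below expresses the conventional workflow (Model 1) as a minimal BPMN-like process in Python: events, tasks, and an exclusive (XOR) gateway connected by sequence flows. This is an illustrative sketch only; the element labels are hypothetical simplifications and do not reproduce the actual diagrams created in Draw.io for this study.

```python
# Illustrative sketch only: a conventional pre-publication workflow (Model 1)
# rendered with BPMN-style elements (events, tasks, an exclusive gateway).
# Node labels are hypothetical simplifications, not the study's actual diagrams.

MODEL_1_PROCESS = {
    # node: (BPMN element type, label, outgoing flows keyed by condition)
    "start":         ("startEvent",       "Manuscript submitted",          {"next": "screen"}),
    "screen":        ("task",             "Editorial screening",           {"next": "assign"}),
    "assign":        ("task",             "Assign peer reviewers",         {"next": "review"}),
    "review":        ("task",             "Reviewers evaluate manuscript", {"next": "decide"}),
    "decide":        ("exclusiveGateway", "Editorial decision",
                      {"accept": "publish", "revise": "review", "reject": "end_rejected"}),
    "publish":       ("task",             "Copy-edit and publish",         {"next": "end_published"}),
    "end_published": ("endEvent",         "Article published",             {}),
    "end_rejected":  ("endEvent",         "Manuscript rejected",           {}),
}

def walk(process, decisions):
    """Trace one path through the process, consuming gateway decisions in order."""
    node, path, decisions = "start", [], iter(decisions)
    while True:
        kind, label, flows = process[node]
        path.append(label)
        if not flows:  # end event reached
            return path
        key = next(decisions) if kind == "exclusiveGateway" else "next"
        node = flows[key]

# One round of revision followed by acceptance:
print(" -> ".join(walk(MODEL_1_PROCESS, ["revise", "accept"])))
```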
The review of innovations in peer review is based on recent work by Waltman, Kaltenbrunner, et al. (2023), while incorporating specific modifications and an interpretive literature analysis. This approach refines the categorization of innovations while aligning with existing scholarly discourse. This method synthesizes research while maintaining an interpretive epistemology (Weed, 2005). Interpretive reviews aim to balance the contributions of research literature and practitioner perspectives by incorporating both extracted data and commentary into the analysis (Kahn et al., 2008). This methodology allows for the development of practical understanding within a field and bridges the gap between research and practice (Russell, 2005). I aim to bridge the gap between research, policy, and practice by analyzing and interpreting both academic and grey literature. For sourcing academic literature, Semantic Scholar was primarily utilized, while standard web search engines (such as Google Scholar) were employed to identify grey literature and partially supplement academic sources.
4. Evolution and Crisis of Peer Review
The practice of pre-publication peer review as understood today emerged significantly later than the founding of the earliest academic journals. For example, Journal des Sçavans (1665), widely regarded as the first academic journal, printed a disclaimer on its first page: “We aim to report the ideas of others without guaranteeing them” (Rennie, 1999, p. 2). However, Kronick (1990) argued that peer review in the broad sense – informal feedback from peers – has existed ever since scientists began to exchange research findings. This took the form of post-publication critiques, such as letters, reviews, and comments, particularly for books.
A narrower concept of peer review, defined as the evaluation of scientific work by peers before publication (pre-publication review), first appeared in 1731 in the first issue of Medical Essays and Observations, published by the Royal Society of Edinburgh. In 1752, the Royal Society of London took responsibility for the publication of Philosophical Transactions and established the “Committee on Papers.” This Committee, composed of research fellows with the highest topical expertise, evaluated scientific claims through debates at Society meetings (Berkenkotter, 1995; Moxham & Fyfe, 2018). Notably, the Committee collectively made publication decisions but avoided endorsing content, reflecting a focus on institutional reputation rather than epistemic authority (Moxham & Fyfe, 2018).
The 19th century marked a shift toward formalized refereeing. The Royal Society introduced written referee reports in 1832, initially emulating the French Académie’s expert evaluations (Moxham & Fyfe, 2018). However, practices remained inconsistent, with referees often providing stylistic feedback rather than rigorous validation (Burnham, 1990). By the late 19th century, learned societies like the Royal Society used refereeing to allocate prestige and manage publication costs, while independent journals relied on editors’ judgments (Burnham, 1990; Moxham & Fyfe, 2018). Learned societies emphasized collective decision-making and expert refereeing to safeguard finances and prestige, whereas editorial peer review initially prioritized rapid publication over formal evaluation (Burnham, 1990; Hooper, 2019). These historical practices laid the groundwork for modern systems but differed in purpose, lacking today’s emphasis on universal quality control and certification of new knowledge (Moxham & Fyfe, 2018).
In the late 1890s, printed peer review report forms (checklists) began supplementing invitation letters (Fyfe, 2019). However, standardized pre-publication peer review by referees became widespread only after World War II (Chapelle, 2014). For example, The Lancet adopted mandatory peer review in 1976. The post-war surge in manuscript submissions drove journals to institutionalize “entry filtration” systems, solidifying pre-publication peer review as a dominant model by the late 20th century (Fig. 1).

The institutionalization of peer review is manifested in the development of ethical principles that have been adopted by the majority of the academic community. One of the most well-known documents, (COPE, 2013), was developed by the Committee on Publication Ethics (COPE). This document contains basic principles for reviewers, which have become common practice in the workflow of academic publishers worldwide. Furthermore, most academic journals have a section on their website that describes the peer review policy applied by the journal.
Despite the progress in editorial policies, which we observe in the formalization of requirements for the peer review process, concerns have been expressed since the end of the 20th century about a crisis in peer review as an institution. In the Introduction, I have already mentioned some of the existing issues, and now I will consider them in more detail. In particular, the following issues can be highlighted:
- The rapidly growing volume of manuscripts, coupled with an increasing workload for researchers and faculty, has contributed to a critical shortage of reviewers. The primary reason for declining review invitations is time constraints (Tite & Schroter, 2007; Willis, 2016). This issue extends review periods and exacerbates authors’ dissatisfaction.
- To address the shortage of reviewers, journals often expand their search criteria, sometimes assigning manuscripts to reviewers with insufficient subject-matter expertise. This practice is consistent with the low level of consensus among reviewers (Bornmann, 2011), leading some researchers to describe peer review as a “game of chance” (Neff & Olden, 2006). Furthermore, the low quality of peer review has been implicated in the broader reproducibility crisis plaguing scientific research (Stoddart, 2016). While multiple factors drive this crisis4, deficiencies in the peer review process remain a significant contributor.
- Current peer review practices perpetuate inequities towards specific groups in academia. Anonymity, often intended to mitigate bias, can instead obscure discriminatory tendencies, creating a “black box” problem. Despite institutional commitments to equity and inclusivity in scientific publishing (COPE, 2021), scientific periodicals are still dominated by a few groups, such as male authors from the United States and the United Kingdom. For example, O. M. Smith et al. (2023) analyzed 300,000 manuscripts in the biological sciences and found that authors from historically excluded communities face worse outcomes in peer review, with journal interventions failing to curb reviewer bias. However, this issue remains debatable: Squazzoni et al. (2021) observed no systematic bias against women-authored submissions across disciplines, noting occasional preferential treatment in specific fields. In contrast, Walker et al. (2015) found significant effects of author characteristics and the author’s institution on review outcomes, although it is impossible to determine whether this was due to objective differences in scientific merit or implicit bias.
- Peer review is also often seen to prioritize consensus over scientific novelty, inadvertently reinforcing mainstream paradigms at the expense of novel ideas. This conservatism can suppress disruptive findings (Steinhauser et al., 2012), favoring incremental advances while marginalizing unconventional approaches (Hess, 1975). As a result, it may limit opportunities for game-changing scientific discoveries (Braben & Dowler, 2017)5. The persistence of neoclassical economics exemplifies this trend: despite its failure to predict the 2008 financial crisis (Keen, 2015) and mounting critiques (Williams & McNeill, 2005), it retains dominance in both academic circles and policy frameworks, suggesting institutional inertia in peer review systems.
- The inefficiency of the current peer review process manifests in two ways: protracted timelines delaying knowledge dissemination (see point 1) and redundant evaluations due to serial submissions. Aczel et al. (2021) estimated that, in 2020, reviewers worldwide spent over 100 million hours – equivalent to more than 15,000 person-years – on peer review, with the associated costs exceeding $1.5 billion (USA), $600 million (China), and $400 million (UK). These figures raise urgent questions about the cost-effectiveness of current practices.
5. Innovations in Peer Review
In the previous section, the crisis facing the traditional peer review model has been outlined. This raises the question of how the crisis can be overcome. Recent scholarship has increasingly focused on innovations in peer review practices (see reviews by Kaltenbrunner et al., 2022, and Woods et al., 2022). Waltman, Kaltenbrunner, et al. (2023) proposed a framework for categorizing the innovations, identifying four “schools of thought”: Quality and Reproducibility, Democracy and Transparency, Equity and Inclusion, and Efficiency and Incentives. This section adopts this taxonomy to organize the discussion around emerging alternatives to conventional peer review.
At the same time, I have updated the typology and introduced a third tier of the hierarchy, where it is applicable (Fig. 2).

The taxonomy explicitly extends the framework proposed by Waltman et al. (2023). My contribution lies in augmenting this structure through the introduction of the third, more granular level of classification. This novel tier functions to specify particular operational models and practices that are conceptually situated within a particular school of thought. For instance, deconstructed publication is categorized under the Quality and Reproducibility school, principally because its core justification emphasizes strengthening methodological rigor and reproducibility, achieved via mechanisms like preregistration and modular dissemination. At the same time, transparency elements intrinsic to deconstructed publication also arguably resonate with the Democracy and Transparency school, illustrating a common situation where innovations cross strict categorical boundaries. Hence, placement decisions tend to reflect the dominant underlying intent of each model, while consciously acknowledging the frequent cross-cutting nature of peer review innovations. The essential function of this third tier, therefore, is to operationalize the schools of thought by mapping them onto specific, recognizable peer review methodologies.
I should also acknowledge that the list of innovations in peer review presented in this paper is not comprehensive. For instance, I could mention ranking papers instead of reviewing them or bidding for papers (Birukou et al., 2011). However, these initiatives mostly did not progress beyond the pilot phase and thus have not significantly affected the publishing landscape.
5.1. Quality and Reproducibility
Training reviewers through workshops, face-to-face sessions, and self-taught courses is a strategy employed by many publishers6. However, empirical studies indicate that such interventions have only a limited impact on improving the quality of peer review. In a randomized controlled trial (RCT) by Schroter et al. (2008), reviewers detected on average only about three of nine major errors, with training interventions (face-to-face or self-taught) yielding minor, short-term improvements. These gains diminished by the third review (Paper 3), suggesting no sustained benefit. Similarly, a BMJ trial found that self-taught training marginally improved review quality scores (2.85 vs. 2.56 for the control group) and error detection, but the effects were not editorially significant and faded over time (Schroter et al., 2004). A meta-analysis of five RCTs evaluating peer review training interventions found no significant improvement in review quality (Bruce et al., 2016). Another systematic review of various training programs reported only marginal and statistically inconclusive effects on reviewers’ performance (Galipeau et al., 2015).
Software leveraging artificial intelligence (AI) for the automated evaluation of scientific papers emerged in the 2010s7, exemplified by tools such as StatReviewer8 and UNSILO9. This category also includes statcheck10, designed to verify the internal consistency of reported statistical results. At that time, such tools were treated as supplementary aids incapable of replacing human expertise (Baker, 2015; Heaven, 2018).
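To illustrate the kind of check such tools automate, the sketch below recomputes a two-tailed p-value from a reported t statistic and its degrees of freedom and flags discrepancies with the reported p – the core consistency test behind statcheck-style tools. This is a simplified illustration under my own assumptions (e.g., the tolerance value), not the actual implementation of any of the tools named above.

```python
# Simplified illustration of a statcheck-style consistency check: recompute the
# p-value implied by a reported test statistic and compare it with the reported p.
# The tolerance is an arbitrary choice for this sketch, not a documented default.

from scipy import stats

def check_t_result(t: float, df: int, reported_p: float, tol: float = 0.005) -> str:
    recomputed_p = 2 * stats.t.sf(abs(t), df)  # two-tailed p from |t| and df
    if abs(recomputed_p - reported_p) <= tol:
        return f"consistent (recomputed p = {recomputed_p:.4f})"
    if (recomputed_p < 0.05) != (reported_p < 0.05):
        # The discrepancy flips statistical significance -- the most serious case.
        return f"gross inconsistency: reported p = {reported_p}, recomputed p = {recomputed_p:.4f}"
    return f"inconsistency: reported p = {reported_p}, recomputed p = {recomputed_p:.4f}"

# Example: a result reported as "t(28) = 2.20, p = .04".
print(check_t_result(t=2.20, df=28, reported_p=0.04))
```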
The emergence of Large Language Models (LLMs) in the 2020s has partially reshaped scholarly practices. LLMs are most effective in augmentation roles, such as drafting reviews, summarizing sections, or identifying methodological inconsistencies, but require human validation for accuracy and context (Díaz et al., 2024; Khraisha et al., 2024). Díaz et al. (2024) argue that LLMs lack critical analysis and struggle with high-level reasoning, making them unsuitable for standalone evaluations. LLMs risk amplifying biases, breaching confidentiality, and producing non-reproducible feedback due to opaque training data and evolving outputs (Hosseini & Horbach, 2023). Over-reliance on LLMs may undermine epistemic norms, such as universalism, by perpetuating status-quo biases in scholarly evaluation (Hosseini & Horbach, 2023), while also posing a significant risk of manipulation bias (Ye et al., 2024). Despite ethical and methodological challenges, the integration of LLMs into peer review processes is likely to expand significantly in the coming years. However, transparency is critical: reviewers and editors must disclose LLM use and assume responsibility for the accuracy and tone of the outputs (Hosseini & Horbach, 2023). Thus, the academic consensus is that LLMs cannot replicate human judgment in critical analysis, bias mitigation, or epistemic community-building. Current evidence advocates for cautious, transparent integration of LLMs as assistive tools, with human oversight remaining indispensable.
The growing reliance on data in scientific research has prompted publishers to formalize the review of datasets (e.g., PLOS (A Reviewer’s Quick Guide to Assessing Open Datasets, n.d.)). Similar scrutiny is increasingly applied to code used in research11 to ensure transparency and reproducibility. While emerging initiatives in peer review promote quality control through explicit reviews of source code and datasets (Kaltenbrunner et al., 2022), empirical studies assessing the effectiveness of such initiatives are nearly absent.
Within the Quality and Reproducibility School, deconstructed publication – a type of scholarly publishing in which research is communicated in separate stages rather than as a single, traditional journal article (Johnson, 2024) – represents a distinct group of models. This approach is sometimes termed fragmented publishing, which involves disseminating a single study in multiple publications (Frandsen et al., 2019), enabling uniformity, reliability, and integrity in scholarly output when a single study is disseminated across multiple formats or iteratively updated over time (Challenger et al., 2000). Originally conceived to mitigate ethical concerns such as data manipulation, deconstructed publication encompasses two specific forms relevant to this discussion: registered reports and modular publishing. While deconstructed publication innovates general publishing workflows rather than peer review itself, its models may substantially influence peer review practices.
Registered reports (RRs) exemplify a hypothesis-driven empirical publication format (Registered Reports: Peer Review before Results Are Known to Align Scientific Values and Practices, n.d.) (Fig. 3). Researchers submit study protocols (Stage 1 manuscript) during the early stages of an investigation, undergoing initial peer review focused on the research question and methodology. Approval at this stage leads to provisional acceptance, after which data collection and analysis proceed. A final review stage (Stage 2 manuscript) then evaluates adherence to the preregistered protocol, ensuring methodological rigor in alignment with the approved design.


Figure 3. Registered Reports Publication Workflow (Model 2).
RRs have empirically demonstrated their effectiveness in mitigating publication bias. Scheel et al. (2021) compared 71 RRs with 152 standard psychology studies, finding that only 44% of RRs reported positive results for their first hypothesis, compared to 96% in standard reports. This stark difference suggests RRs reduce selective reporting and Type-I error inflation, supporting their role in enhancing research credibility. Thus, the authors argue that RRs counteract publication bias by decoupling study acceptance from results (Scheel et al., 2021). Soderberg et al. (2021) found that RRs significantly outperform traditional publications in psychology and neuroscience across multiple quality metrics, including methodology, analysis, and overall paper quality, while maintaining comparable levels of novelty and creativity despite preregistration requirements.
An important advantage of RRs over other peer review models lies in their capacity to enhance research efficiency. By conducting peer review at Stage 1, researchers gain the opportunity to refine their study design or data collection protocols before empirical work begins. Other models of review can offer critiques such as “the study should have been conducted differently” without an actionable opportunity for improvement12.
However, a survey by Sarafoglou et al. (2022) revealed that while researchers acknowledge preregistration improves hypothesis formulation, experimental design, and data management, they also report significant drawbacks. Among 299 researchers with preregistration experience, 73% noted increased work-related stress, and 78% observed longer project durations. These practical burdens stem from the need for rigorous planning, peer review delays, and adherence to predefined protocols. Additionally, researchers without preregistration experience (n = 56) were less likely to recommend the practice, with only 45% endorsing it, citing concerns about inefficiency and compatibility with exploratory research (Sarafoglou et al., 2022).
Critiques of RRs include their limited impact on theoretical rigor and the potential stigmatization of non-preregistered studies. Scheel et al. (2021) caution that RRs do not address weak theoretical foundations, which remain a critical issue in psychology. Sarafoglou et al. (2022) further highlight disparities in adoption across disciplines, with fields like animal research or industry collaborations perceiving preregistration as less feasible. Some respondents also noted that journals occasionally penalize deviations from preregistered plans, undermining flexibility in data analysis.
Syed (2023) examined how editors and peer reviewers engage with preregistration protocols during manuscript evaluation. An analysis of 201 articles from PLOS journals with open peer review histories revealed minimal engagement: 43% of articles had at least one editor/reviewer mention preregistration, dropping to 14% for accessing preregistrations and 10% for comparing plans to manuscripts. At the individual editor/reviewer level (n = 689), engagement plummeted further: 18% mentioned preregistration, 5% accessed plans, and 3% evaluated alignment with manuscripts. When reviewers did assess preregistrations, most (73%) identified undisclosed deviations (e.g., unregistered analyses, unreported preregistered methods). These findings suggest peer review rarely verifies preregistration adherence, undermining its credibility. The author argues reviewers must prioritize evaluating preregistrations, while authors should transparently report deviations. Without systematic scrutiny during review, preregistration risks becoming a superficial marker of transparency rather than rigor. A recent innovation, RegCheck13, aims to address this issue, employing LLMs to systematically compare preregistered research plans with published scientific papers. This tool enables researchers to efficiently assess whether completed studies align with their original protocols and identify any deviations.
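As a rough sketch of what such an LLM-based comparison might look like, the snippet below asks a model to list deviations between a preregistration and a paper’s methods section. The `ask_llm` callable, the prompt wording, and the deviation categories are hypothetical placeholders of my own; they do not reproduce RegCheck’s actual pipeline.

```python
# Hypothetical sketch of an LLM-assisted preregistration audit in the spirit of
# RegCheck. `ask_llm` is a placeholder for any text-completion API; no specific
# provider is assumed, and the prompt does not reproduce RegCheck's actual design.

from typing import Callable

PROMPT_TEMPLATE = """You are assisting with a preregistration audit.
Compare the preregistered plan with the published methods section below and
list every deviation (e.g., unregistered analyses, unreported preregistered
methods, changed sample sizes or exclusion criteria). Answer 'none found'
if the two documents align.

PREREGISTRATION:
{prereg}

PUBLISHED METHODS:
{methods}

Deviations:"""

def audit_preregistration(prereg: str, methods: str,
                          ask_llm: Callable[[str], str]) -> str:
    """Return the model's deviation report; a human must verify every claim."""
    return ask_llm(PROMPT_TEMPLATE.format(prereg=prereg, methods=methods))

# Example with a stub model (replace the lambda with a real API call):
print(audit_preregistration(
    prereg="N = 100; two-tailed t-test on reaction times.",
    methods="We recruited 80 participants and ran a one-tailed test.",
    ask_llm=lambda prompt: "[model output would appear here]",
))
```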
A related initiative, called Lifecycle Journal14, has recently been launched by the Center for Open Science. This endeavor combines the benefits of deconstructed publication with the post-publication, or publish-then-review, model, which will be discussed in the next subsection.
A distinct form of deconstructed publication is modular publishing (Fig. 4). Unlike preprint servers, which disseminate complete manuscripts, modular platforms publish individual research components such as hypotheses, methodologies, datasets, and code. These components correspond to discrete stages of the research lifecycle, enabling iterative feedback at each stage. Two prominent examples of this model are ResearchEquals15 and Octopus16. ResearchEquals permits authors to upload 37 types of research modules, including “Review” and “Other” items, and the modules can be uploaded in any sequence. In contrast, Octopus requires seven research elements to be published in a predefined order, aligning more closely with the structure of empirical research. Both platforms support open post-publication review, though they differ functionally: Octopus allows revisions to published modules, while ResearchEquals lacks versioning capabilities.

3a – no versioning of published modules (ResearchEquals). 3b – modules can be revised (Octopus).
5.2. Democracy and Transparency
Peer review models focusing exclusively on the rigor and soundness of the research rather than its significance and relevance, as implemented by journals such as PLOS ONE and Scientific Reports, share similarities with registered reports. However, this approach is distinguished by its single-stage evaluation process. While this model maintains the traditional editorial workflow, it shifts the focus from broader interpretative claims to the technical soundness of research. The rationale for this paradigm is the assumption that the academic community at large is better positioned to evaluate the significance and contribution of a study than editors and peer reviewers alone (Spezi et al., 2017).
Results-blinded peer review is an innovative approach to manuscript evaluation that aims to reduce biases associated with traditional peer review processes. By masking the results during the initial review stages, this method encourages a focus on the quality of the research design and methodology rather than the outcomes (Grand et al., 2018; Järvinen et al., 2014; Locascio, 2017; Woznyj et al., 2018). Reviewers are less likely to favor positive outcomes, leading to a more balanced representation of research findings (Grand et al., 2018; Woznyj et al., 2018). Researchers may focus more on methodological soundness and theoretical contributions, enhancing the overall quality of research (Locascio, 2017; Woznyj et al., 2018).
Open Peer Review represents a broad conceptual category encompassing several distinct models of evaluation. Open Reports and Open Identities are characterized by their expansion of conventional peer review practices. While these approaches retain the core framework of pre-publication evaluation, they modify its operational parameters by introducing transparency mechanisms absent in conventional systems. Wolfram et al. (2020) identified 617 journals that had published at least one article with open identities or open peer review reports as of 2019.
Open Reports facilitate the publication of review reports alongside articles, providing readers with insights into the review process. They add a layer of scrutiny, potentially improving the quality of reviews as they are subject to public examination (Fox, 2021; Ross-Hellauer, 2017). Open Identities are thought to increase accountability, motivating reviewers to provide thorough evaluations. Open Reports and Open Identities can enhance transparency in the peer review process, potentially improving the quality of research outputs by allowing for more constructive feedback and accountability among reviewers (Ross-Hellauer & Horbach, 2024). At the same time, evidence suggests that female reviewers are less likely to sign their reviews, which may discourage their participation in open review (Fox, 2021). Some researchers express concerns that open identities could lead to biased reviews and discourage honest feedback, particularly from junior researchers (Ross-Hellauer & Horbach, 2024). A further limitation arises when manuscripts are rejected: in this case, peer review reports remain accessible solely to the authors.
The Publish-Review-Curate (PRC) model, often referred to as preprint-based peer review17, is the next step towards full transparency in scholarly communication. This framework radically redefines the role of peer review: it is no longer a gatekeeping tool, but rather a platform for discussion. Publication is no longer the final stage of work; it becomes a foundation for iterative updates. Platforms such as eLife18, Peer Community in19, F1000Research20, and MetaROR21 exemplify the PRC model, as shown in Fig. 5. It is important to note that the implementation of PRC varies with the context. For example, in the case of MetaROR (as well as eLife, Peer Community in, etc.), the manuscript is initially posted by the author on a preprint server such as arXiv, MetaArXiv, SocArXiv, bioRxiv, or OSF Preprints, while F1000Research operates a repository of its own.
Among the key players in PRC implementation is Copernicus Publishing, a publisher specializing in geoscience journals. For instance, journals like Atmospheric Chemistry and Physics22 employ a two-stage publication model: manuscripts are initially released as non-peer-reviewed “discussion papers” to solicit community input, after which revised versions undergo formal peer review.

Model 4a involves uploading the manuscript directly to a platform (e.g., F1000Research). Model 4b, on the other hand, involves initially posting a preprint on a preprint server followed by peer review on a peer review platform (e.g., eLife, MetaROR, or Peer Community in).
Model 4b is sometimes described in terms of overlay journals, i.e., academic journals that operate by overlaying peer review and editorial processes onto preprints hosted in open access (OA) repositories (Corker et al., 2024; Rousi & Laakso, 2024).
Collaborative peer review is a model where multiple reviewers evaluate a manuscript together, rather than independently. Each reviewer reads the manuscript and prepares a written evaluation, similar to traditional peer review. However, the key difference is that reviewers share their evaluations and engage in discussions to reach a consensus on the review (An et al., 2023). It is a decentralized process that enhances the quality of manuscript evaluations through independent assessments, group discussions, and consensus building, ultimately benefiting both the reviewers and the scientific community. PREreview23 exemplifies collaborative decentralized peer review through its Live Reviews, enabling open real-time discussion of preprints.
5.3. Equity and Inclusion
The principles of equity and inclusion, as well as the inappropriateness of biases of various origins (geographic, gender, ethnic), are reflected in numerous recommendations (COPE, 2021; Royal Society of Chemistry, 2020) and in the policies of most major academic publishers. However, as mentioned above, the results of implementing these policies are still far from successful, and perhaps these processes require more time.
Some studies suggest gender bias exists in the peer review process. Analyzing reviewer feedback from the American Political Science Review between 2007 and 2020, König and Ropers (2022) found that male reviewers were more likely to give favorable evaluations to male-authored manuscripts, while female reviewers showed a similar bias toward female-authored submissions. Manuscripts reviewed by both male and female reviewers exhibited less gender bias, with similar evaluation standards across author genders. Murray et al. (2018) found that manuscripts with male last authors had a 7% higher acceptance rate compared to those with female last authors when reviewed by all-male teams, while mixed-gender reviewer teams showed smaller, non-significant differences. Logistic regression analyses confirmed these disparities persisted even after controlling for institutional prestige, submission year, and other variables.
However, empirical studies reveal contradictory outcomes. Based on a large-scale analysis of 145 scholarly journals involving approximately 1.7 million authors and 740,000 referees, Squazzoni et al. (2020) found no systematic evidence of gender bias against women in peer review processes. Manuscripts authored or co-authored by women were generally treated as favorably or slightly more favorably than those by men, particularly in biomedicine and the health sciences, although social sciences and humanities journals showed relatively less favorable outcomes for women. Editors exhibited gender homophily by matching authors and referees by gender, but this did not translate into systemic disadvantages for women.
Many journals have predominantly white editorial boards, which can perpetuate biases against marginalized authors (Bancroft et al., 2022). O. M. Smith et al. (2023) found that authors from Asia, non-English-speaking countries, and low-Human Development Index countries faced worse review outcomes. The authors partially attributed these disparities to the lack of diversity among editors and reviewers, who were predominantly from North America and Europe. Zumel Dumlao and Teplitskiy (2023) argued that authors from wealthier countries were more likely to be assigned same-country reviewers, who were more likely to give positive reviews.
Diverse editorial boards are often viewed as tools for mitigating biases in academic publishing. For example, a field experiment showed that racially diverse editorial boards reduced disparities in perceptions of journal fairness and willingness to submit research between race scholars and non–race scholars (Auelua-Toomey & Roberts, 2022). Fox et al. (2019) found that female editors were more likely to invite female reviewers, which could encourage more submissions from female authors. Another study showed that female editors increased the share of published articles authored by women (Bransch & Kvasnicka, 2022). However, there remains a notable lack of empirical evidence on the effectiveness of such policies.
Double-blind peer review is intended to protect the identity of the author and thereby prevent bias in the review. This practice has long been used in the social sciences and humanities (Horbach & Halffman, 2020; Karhulahti & Backe, 2021). In the Global South, it is quite common for the natural sciences and medicine as well (Fontenelle & Sarti, 2021). A study on the International Conference on Learning Representations (ICLR) found that after implementing double-blind review, scores for prestigious authors decreased, suggesting a reduction in prestige bias (Sun et al., 2022). O’Connor et al. (2017) argued that the double-blind peer review process is largely effective in minimizing bias; however, perceived unblinding of authors or institutions is associated with higher manuscript acceptance rates even in double-blind peer review systems.
However, anonymity is largely conditional: a manuscript usually retains many “keys” by which one can determine, if not the identities of the authors, then their countries, research groups, or affiliated organizations. The reviewer’s identity, on the other hand, is much more securely protected. This issue is especially evident in localized communities: in Russia, we often encounter deliberately positive or deliberately negative reviews (Sukharev, 2020). The same is true in specialized fields where reviewers may have conflicts of interest (Rühli et al., 2009).
5.4. Efficiency and Incentives
Current academic publishing disproportionately benefits publishers, as reviewers mainly work unpaid – a form of “academic exploitation” funded largely from public funds. This is the viewpoint I adhere to; however, I must acknowledge that it is debatable. An idealistic view holds that peer review is an integral part of academic service, fostering a sense of community and collaboration among researchers. Peer reviewers typically hold positions at academic or research institutions, where they receive a salary for their primary responsibilities, which may include research, teaching, and administrative duties. The salary from these institutions does not specifically compensate them for peer reviewing, as this task is often considered part of their professional duties (Bellini, 2007). Some journals and publishers have begun to explore direct compensation models for peer reviewers, recognizing the time and expertise required for thorough reviews. At the same time, financial incentives may encourage hasty reviews, resulting in lower-quality assessments, as reviewers might prioritize speed over thoroughness to maximize earnings (Garg, 2015).
However, any work requires not only internal motives but also external incentives. Peer review, a cornerstone of scientific publishing, requires appropriate recognition. Current recognition mechanisms include certificates of recognition from academic publishers, as well as records integrated into researchers’ profiles on platforms such as Web of Science, ORCID, etc. However, peer review activities remain largely unaccounted for in institutional and national systems of research evaluation, reward, and recognition. I should note that open review increases the visibility of reviewers’ work, facilitating recognition through the disclosure of their identities and published comments (Silveira & Abadal, 2024).
As mentioned above, traditional peer review faces inefficiencies. This issue arises when an article rejected by one journal is resubmitted to another, where peer review restarts from scratch. One way to solve this problem would be to transfer reviews between journals, also known as “portable peer review.” At the moment, this model is mainly used by large publishing houses (manuscript transfer to another journal of the same publishing house). There are also consortia of journals, such as the Neuroscience Peer Review Consortium (Saper et al., 2009), as well as the Manuscript Exchange Common Approach (MECA), an initiative that supports the exchange of manuscripts and reviews between journals and platforms, including preprint servers (NISO RP-30-2023, Manuscript Exchange Common Approach (MECA) (Version 2.0.1), 2023). Although review exchange reduces peer review costs, it does not significantly change the editorial workflow; thus, it is simply an add-on to Model 1 (the conventional model).
The idea of exchanging reviews has evolved into journal-independent peer review. Review Commons24, a consortium of 23 life sciences journals, brought this idea into practice. A manuscript is first published on a preprint server and undergoes independent review, after which the author can revise the paper and submit it to one of the consortium members. In my opinion, this improves the quality of peer review by ensuring that reviewers focus on the manuscript itself rather than on the question of whether it fits a particular journal. A focus on a journal’s scope, prestige, or specific thresholds can lead to biases, such as undervaluing methodologically sound but niche studies or overemphasizing flashy but superficial results. Journal-independent peer review fits into the workflow of Model 4 (“Publish-Review-Curate”), where manuscripts are first published and then reviewed. The two models share conceptual similarities, including a focus on transparency, community-driven curation, and flexibility.
6. Comparative Analysis of Peer Review Models
In the previous sections, I briefly examined the evolution of peer review and its current crisis in relation to scholarly communication. Next, I explored the main innovations in peer review, which can be classified according to the direction of the proposed changes (e.g., towards greater transparency, reproducibility, or efficiency). Synthesizing these developments, it becomes possible to distinguish four overarching models of peer review based on their reconfiguration of scholarly communication workflows:
- Model 1: conventional model (pre-publication peer review – gatekeeping),
- Model 2: registered reports (two-stage evaluation),
- Model 3: modular publishing (granular, iterative review),
- Model 4: Publish-Review-Curate (PRC model – post-publication curation).
Table 2 presents comparative characteristics of these models.
Table 2. Comparative characteristics of the four peer review models.

| Comparison options | Model 1 | Model 2 | Model 3 | Model 4 |
| --- | --- | --- | --- | --- |
| Content available to the reader |  |  |  |  |
| Editor’s role |  |  | None | Curation: evaluation of scientific work, organization and oversight of the peer review process (without making publication decisions) |
| What does peer review entail? | Manuscript evaluation in order to identify its strengths and weaknesses, help authors improve their work, and finally make an acceptance decision (1 stage) | Stage 1: evaluation of the strengths and weaknesses of the study design and its rigor, leading to a provisional acceptance decision. Stage 2: evaluation of the conducted study in accordance with the preregistered design, leading to the final acceptance decision (2 stages) | Option 1: peer review with an opportunity to revise the research module. Option 2: peer review with an opportunity to update within the next stage of the research process (manuscript preparation) | Manuscript evaluation aimed at identifying its strengths and weaknesses. Some PRC initiatives also make decisions on inclusion into curated collections (Peer Community In, F1000Research) |
| Object of review | Manuscript as a whole (methodology, relevance, novelty, soundness, etc.), sometimes only the soundness | Manuscript in terms of study design and execution | Research module | Manuscript as a whole (methodology, relevance, novelty, results, etc.) |
| Types of research | Any | Empirical studies, systematic reviews25 | Better fits empirical studies | Any |
Model 1 emphasizes editorial control but lacks transparency, while Model 4 prioritizes open, community-driven curation over gatekeeping. Models 2 and 3 offer structured, iterative evaluation but vary in flexibility, with Model 2 focusing on methodological rigor and Model 3 enabling iterative, module-based dissemination. Now, I can compare the four models across the main functions of scholarly communication (Table 3).
Table 3. Peer review models and the functions of scholarly communication.

| Functions of scholarly communication | Model 1 | Model 2 | Model 3 | Model 4 |
| --- | --- | --- | --- | --- |
| Registration | Journals register contributions through publication, but delays in peer review can postpone registration. | Preregistration ensures early registration of methodology but not results. | Research modules are published stage by stage, registering contributions at each stage. | Preprints are published instantly, ensuring early registration of contributions*. |
| Awareness/Dissemination | Relies on journal reputation and access. | Preregistration provides early access to the study design; in other aspects, dissemination follows Model 1 and Model 4. | Modules are distributed independently, increasing the accessibility of research components, but the overall message may be lost. | Preprint servers and open peer review platforms (eLife, F1000Research, MetaROR) provide instant access to a wide audience. |
| Certification/Quality Control | Editors and peer reviewers ensure quality. At the same time, traditional peer review is often non-transparent, which can mask bias and poor review quality. | Two-stage peer review mitigates publication bias but is more time-consuming. | Modular publishing focuses on details26. | The PRC model increases transparency. |
| Archiving | Journals, publishers’ websites, and electronic libraries normally guarantee long-term storage, but access depends on the publisher’s policy. While many online-only open-access journals lack robust preservation plans, journals affiliated with large commercial publishers or utilizing dedicated preservation services, such as the Public Knowledge Project, offer stronger guarantees against vanishing (Brainard, 2020). | Similar to Model 1, but with two stages of publishing (preregistration and report). | Modules are stored on specialized platforms (ResearchEquals, Octopus). | Preprints are archived in open repositories (arXiv, bioRxiv, etc.). |
* Preprint is optional for Models 1 and 2.
The conventional Model 1 provides content filtering but lags behind in the speed of dissemination and in transparency. Model 2 (Registered Reports) overcomes some of the shortcomings of Model 1 by focusing on methodology but is not suitable for all types of studies. Model 3 (Modular Publishing) is promising for open science but very new; the number of adoption cases is still limited, so little is known yet about its strengths and weaknesses, and its potential does not appear to have been fully realized.
Model 4 (PRC) generally corresponds to the vision of the International Science Council (ISC) of “more efficient and effective modes of peer review that are inspired by open norms” (International Science Council, 2023, p. 12). Quality control through open preprint peer review is the most transparent peer review model, but it is more time-consuming for the reader than traditional pre-publication peer review. I would argue that AI is already transforming the reading workflow by changing the very essence of the “reading” function (see Bergstrom & Ruediger, 2024). I believe that AI could similarly be used to summarize the strengths and weaknesses of papers based on open reviews, thereby not only supporting open reports but also making Model 4 more reader-friendly. Many PRC advocates are motivated by their frustration with the conventional publishing system, and this frustration often outweighs a clear vision of the future (Hyde, 2025). This partly explains the diversity of PRC models in use and the divergence in assessments of future trajectories.
Models 3 and 4, and partly Model 2, raise concerns about the financial sustainability of such initiatives, because most of them are non-profit organizations that depend on grant funding (a subject for future research). Long-term preservation, as well as versioning and peer review of preprints, requires infrastructure. However, the notion that the conventional model has superior financial sustainability is misguided, as evidenced by journals that have disappeared in the past and continue to disappear now. In fact, financial instability threatens all scholarly communication models and functions – see, for instance, Jamali et al., 2022.
It should also be noted that peer review innovations pull in opposing directions, with some aiming to increase efficiency and reduce costs, while others aim to promote rigor and increase costs (Kaltenbrunner et al., 2022). Ultimately, I must acknowledge that no “ideal” peer review model exists. The selection of a model, as well as the prioritization of specific scholarly communication functions, depends on the context. For this reason, a forward-looking publication platform should enable flexibility in choosing between diverse models and frameworks, which is a topic requiring further research. I must also emphasize that many innovations in peer review have existed for too short a period to allow empirical assessment of their effectiveness. To date, such evaluations have been conducted only for the training of peer reviewers, RRs, open reports, and open identities, as discussed in Section 5. Another interesting point of discussion is the relationship between the genre of an article and the peer review model used. This topic is also worthy of future research.
7. Conclusion
The comparative analysis of four peer review models reveals that no single approach universally addresses the multifaceted challenges of scholarly communication. Instead, the optimal model depends on disciplinary priorities, institutional resources, and epistemic goals. For instance:
- If methodological rigor and reproducibility are at stake, choose Model 2 (registered reports).
- If rapid dissemination and community-driven evaluation are critical, choose Model 4 (PRC).
- If incremental, iterative research workflows prevail, choose Model 3 (modular publishing).
Innovations in peer review can be easily combined. For example, portable peer review can be integrated with open reports/identities, and registered reports and modular publishing are interoperable with the PRC model. However, I expect future research to provide empirical evidence on the long-term impact of peer review innovations on equity, reproducibility, and epistemic diversity.
My personal perspective is that the conventional publication model with a “black box” peer review inside is increasingly proving its inadequacy. I personally support Model 4 (Publish-Review-Curate) as a peer review innovation that fits the majority of disciplines and types of studies. At the same time, I must recognize the complexity of change in academic publishing. Academic traditions are deeply entrenched, and transforming these practices will require sustained effort over time.
The PRC model is a return to the roots of scholarly communication. It will allow all actors involved to take greater responsibility for their work: authors for their articles, reviewers for their assessments, and editors for supporting the process of scholarly communication. This is the atmosphere of scientific discussion that we so badly need.
Acknowledgements
The author gratefully acknowledges the peers’ contributions. Ludo Waltman, Wolfgang Kaltenbrunner, and Denis Kosyakov reviewed draft versions of this paper and provided valuable suggestions for improvement. Special thanks to the MetaROR editors, particularly Kathryn Zeiler and Jason Chin, and peer reviewers Balazs Aczel, Martin Bush, Olmo R. van den Akker, and those reviewers who chose to remain anonymous.
Competing interests
The author is affiliated with the Centre for Science and Technology Studies at Leiden University, which is involved in the development of the MetaROR platform.
Contributions
Dmitry Kochetkov: Investigation, Visualization, Writing – original draft
Data availability statement
Data sharing is not applicable to this article as no new data was created in this study.
AI-Assisted Content Processing Statement
The author employed Deepseek-R1 (v2.3) for the initial translation of literature from non-English/non-Russian languages, as well as for limited assistance with grammatical restructuring and lexical optimization of the content. The author maintained full oversight throughout this process; all AI-generated outputs were subsequently verified, contextually adjusted, and substantively edited. Final responsibility for the content remains exclusively with the human author.
Notes
1 Peer feedback received during discussions of the preprint of this research (for further details, see Kochetkov, 2024).
2 E.g., an open science experiment during the recent Science, Technology and Innovation Indicators (STI2023) conference (Waltman, Mulati, et al., 2023).
3 Draw.io. URL: https://www.drawio.com/ (date of access: 18.03.2025).
4 In my opinion, current academic publishing is a system full of inconsistencies. Confirmation bias and publication bias are well-documented examples of such inconsistencies. They promote selective reporting and groupthink (Resnik & Smith, 2020; Rothstein et al., 2005), distort the academic landscape (Yngve et al., 2008), and raise ethical concerns (Ward, 2015; Yngve et al., 2008), undermining the reliability of this scholarly communication channel.
5 It is important to acknowledge that this phenomenon exhibits significant disciplinary variation. In some disciplines, critical observers have instead raised concerns that journals incentivize inflated claims by prioritizing the publication of novel positive results (Nosek et al., 2012). This pressure forces authors to emphasize scientific novelty at the expense of reproducibility, as exemplified by the long-standing debates in psychology (Open Science Collaboration, 2015).
6 E.g., Certified Peer Reviewer Course by Elsevier. URL: https://researcheracademy.elsevier.com/navigating-peer-review/certified-peer-reviewer-course (date of access: 22.01.2024).
7 At the same time, plagiarism detection systems have existed much longer; for example, “Antiplagiat,” a well-known system in Russia, originated in 2005.
8 StatReviewer. URL: http://statreviewer.com/ (date of access: 22.01.2024).
9 UNSILO. URL: https://site.unsilo.com/site/ (date of access: 22.01.2024).
10 statcheck. URL: https://michelenuijten.shinyapps.io/statcheck-web/ (date of access: 22.01.2024); also available as an R package.
11 Among recent initiatives, I can mention CODECHECK. URL: https://codecheck.org.uk/process/ (date of access: 22.01.2024).
12 The author gratefully acknowledges reviewer Balazs Aczel for this valuable addition to the study.
13 RegCheck. URL: https://regcheck.app/ (date of access: 02.04.2025).
14 Lifecycle Journal. URL: https://lifecyclejournal.org/ (date of access: 02.04.2025).
15 ResearchEquals. URL: https://www.researchequals.com/ (date of access: 28.02.2024).
16 Octopus. URL: https://www.octopus.ac/ (date of access: 28.02.2024).
17 The Publish-Review-Curate (PRC) model is occasionally referred to as post-publication peer review or open peer review. However, this terminological ambiguity risks conceptual misalignment: the former term overlaps semantically with processes like publication commentary or book reviews, while the latter conflates PRC with Open Reports and Open Identities addressed earlier.
18 eLife. URL: https://elifesciences.org/ (date of access: 22.01.2024).
19 Peer Community in. URL: https://peercommunityin.org/ (date of access: 22.01.2024).
20 F1000Research. URL: https://f1000research.com/ (date of access: 22.01.2024).
21 MetaROR. URL: https://metaror.org/ (date of access: 14.04.2025).
22 Atmospheric Chemistry and Physics. URL: https://www.atmospheric-chemistry-and-physics.net/peer_review/interactive_review_process.html (date of access: 06.05.2025).
23 PREreview. URL: https://prereview.org/ (date of access: 22.01.2024).
24 Review Commons. URL: https://www.reviewcommons.org/ (date of access: 22.01.2024).
25 There is a point of view that registered reports work best for empirical studies of a confirmatory nature and less well for exploratory studies, where researchers cannot necessarily anticipate all aspects of data collection and analysis (Arpinon & Espinosa, 2023).
26 My concern is that insufficient consideration of broader system interactions could lead to undetected systematic errors. Current evidence remains limited, however, and this model requires further empirical investigation.
References
A Reviewer’s Quick Guide to Assessing Open Datasets. (n.d.). PLOS. Retrieved January 23, 2024, from https://plos.org/resource/peer-reviewing-data/
Aczel, B., Szaszi, B., & Holcombe, A. O. (2021). A billion-dollar donation: Estimating the cost of researchers’ time spent on peer review. Research Integrity and Peer Review, 6(1), 14. https://doi.org/10.1186/s41073-021-00118-2
Allen, K., Reardon, J., Lu, Y., Smith, D. V., Rainsford, E., & Walsh, L. (2022). Towards improving peer review: Crowd-sourced insights from Twitter. Journal of University Teaching & Learning Practice, 19(3).
An, J., Mendenhall, A., & Kaeberlein, M. (2023). The collaborative peer review framework as a model for training biomedical graduate students to perform rigorous, ethical peer review. Translational Medicine of Aging, 7, 9–11. https://doi.org/10.1016/j.tma.2023.01.002
Arpinon, T., & Espinosa, R. (2023). A practical guide to Registered Reports for economists. Journal of the Economic Science Association, 9(1), 90–122. https://doi.org/10.1007/s40881-022-00123-1
Auelua-Toomey, S. L., & Roberts, S. O. (2022). The Effects of Editorial-Board Diversity on Race Scholars and Their Scholarship: A Field Experiment. Perspectives on Psychological Science, 17(6), 1766–1777. https://doi.org/10.1177/17456916211072851
Baffy, G., Burns, M. M., Hoffmann, B., Ramani, S., Sabharwal, S., Borus, J. F., Pories, S., Quan, S. F., & Ingelfinger, J. R. (2020). Scientific Authors in a Changing World of Scholarly Communication: What Does the Future Hold? American Journal of Medicine, 133(1), 26–31. https://doi.org/10.1016/j.amjmed.2019.07.028
Baker, M. (2015). Smart software spots statistical errors in psychology papers. Nature. https://doi.org/10.1038/nature.2015.18657
Bancroft, S. F., Ryoo, K., & Miles, M. (2022). Promoting equity in the peer review process of journal publication. Science Education, 106(5), 1232–1248. https://doi.org/10.1002/sce.21733
Bellini, L. P. (2007). Recompensando revisores [Rewarding reviewers]. Arquivos Brasileiros de Oftalmologia, 70(4), 727–728. https://doi.org/10.1590/S0004-27492007000400029
Bergstrom, T., & Ruediger, D. (2024). A Third Transformation? Generative AI and Scholarly Publishing. https://doi.org/10.18665/sr.321519
Berkenkotter, C. (1995). The Power and the Perils of Peer Review. Rhetoric Review, 13, 245–248.
Birukou, A., Wakeling, J. R., Bartolini, C., Casati, F., Marchese, M., Mirylenka, K., Osman, N., Ragone, A., Sierra, C., & Wassef, A. (2011). Alternatives to Peer Review: Novel Approaches for Research Evaluation. Frontiers in Computational Neuroscience, 5. https://doi.org/10.3389/fncom.2011.00056
Bohlin, I. (2004). Communication regimes in competition: The current transition in scholarly communication seen through the lens of the sociology of technology. Social Studies of Science, 34(3), 365–391. https://doi.org/10.1177/0306312704041522
Bornmann, L. (2011). Scientific peer review. Annual Review of Information Science and Technology, 45(1), 197–245. https://doi.org/10.1002/aris.2011.1440450112
Braben, D., & Dowler, R. (2017, September 17). Peer review processes risk stifling creativity and limiting opportunities for game-changing scientific discoveries. LSE Impact Blog.
Brainard, J. (2020). Dozens of scientific journals have vanished from the internet, and no one preserved them. Science. https://doi.org/10.1126/science.abe6998
Bransch, F., & Kvasnicka, M. (2022). Male Gatekeepers: Gender Bias in the Publishing Process? Journal of Economic Behavior and Organization, 202(11089), 714–732. https://doi.org/10.1016/j.jebo.2022.07.031
Bruce, R., Chauvin, A., Trinquart, L., Ravaud, P., & Boutron, I. (2016). Impact of interventions to improve the quality of peer review of biomedical journals: A systematic review and meta-analysis. BMC Medicine, 14(1), 85. https://doi.org/10.1186/s12916-016-0631-5
Burnham, J. C. (1990). The Evolution of Editorial Peer Review. JAMA: The Journal of the American Medical Association, 263(10), 1323. https://doi.org/10.1001/jama.1990.03440100023003
Challenger, J., Iyengar, A., Witting, K., Ferstat, C., & Reed, P. (2000). A publishing system for efficiently creating dynamic Web content. Proceedings IEEE INFOCOM 2000. Conference on Computer Communications. Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies (Cat. No.00CH37064), 2, 844–853. https://doi.org/10.1109/INFCOM.2000.832259
Chapelle, F. H. (2014). The History and Practice of Peer Review. Groundwater, 52(1), 1–1. https://doi.org/10.1111/gwat.12139
Chtena, N., Alperin, J. P., Pinfield, S., Fleerackers, A., & Pasquetto, I. V. (2025). Preprint servers and journals: Rivals or allies? Journal of Documentation. https://doi.org/10.1108/JD-09- 2024-0215
COPE. (2013). Ethical guidelines for peer reviewers (English). https://doi.org/10.24318/cope.2019.1.9
COPE. (2021). Diversity and inclusivity. https://doi.org/10.24318/RLqSoVsZ
Corker, K. S., Waltman, L., & Coates, J. A. (2024). Understanding the Publish-Review-Curate (PRC) Model of Scholarly Communication. In MetaArXiv. https://doi.org/10.31222/osf.io/h7swt
Díaz, O., Garmendia, X., & Pereira, J. (2024). Streamlining the review process: AI-generated annotations in research manuscripts. 1–16.
Fontenelle, L. F., & Sarti, T. D. (2021). Attitudes toward open peer review among stakeholders of a scholar-led journal in Brazil. Transinformação, 33, e200072. https://doi.org/10.1590/2318-0889202133e200072
Fox, C. W. (2021). Which peer reviewers voluntarily reveal their identity to authors? Insights into the consequences of open-identities peer review. Proceedings of the Royal Society B: Biological Sciences, 288(1961), 20211399. https://doi.org/10.1098/rspb.2021.1399
Fox, C. W., Duffy, M. A., Fairbairn, D. J., & Meyer, J. A. (2019). Gender diversity of editorial boards and gender differences in the peer review process at six journals of ecology and evolution. Ecology and Evolution, 9(24), 13636–13649. https://doi.org/10.1002/ece3.5794
Frandsen, T. F., Eriksen, M. B., Hammer, D. M. G., & Christensen, J. B. (2019). Fragmented publishing: A large-scale study of health science. Scientometrics, 119(3), 1729–1743. https://doi.org/10.1007/s11192-019-03109-9
Fyfe, A. (2019, September 15). Quality in peer review: a view through the lens of time. The Royal Society. https://royalsociety.org/blog/2019/09/quality-in-peer-review-a-view-through-the-lens-of-time/
Galipeau, J., Moher, D., Campbell, C., Hendry, P., Cameron, D. W., Palepu, A., & Hébert, P. C. (2015). A systematic review highlights a knowledge gap regarding the effectiveness of health-related training programs in journalology. Journal of Clinical Epidemiology, 68(3), 257–265. https://doi.org/10.1016/j.jclinepi.2014.09.024
Garg, P. K. (2015). Financial Incentives to Reviewers: Double-edged Sword. Journal of Korean Medical Science, 30(6), 832. https://doi.org/10.3346/jkms.2015.30.6.832
Grand, J. A., Rogelberg, S. G., Banks, G. C., Landis, R. S., & Tonidandel, S. (2018). From Outcome to Process Focus: Fostering a More Robust Psychological Science Through Registered Reports and Results-Blind Reviewing. Perspectives on Psychological Science, 13(4), 448–456. https://doi.org/10.1177/1745691618767883
Heaven, D. (2018). AI peer reviewers unleashed to ease publishing grind. Nature, 563(7733), 609–610. https://doi.org/10.1038/d41586-018-07245-9
Hess, E. L. (1975). Effects of the review process. IEEE Transactions on Professional Communication, PC–18(3), 196–199. https://doi.org/10.1109/TPC.1975.6591188
Hooper, M. (2019). Scholarly Review, Old and New. Journal of Scholarly Publishing, 51(1), 63–75. https://doi.org/10.3138/jsp.51.1.04
Horbach, S. P. J. M., & Halffman, W. (2020). Journal Peer Review and Editorial Evaluation: Cautious Innovator or Sleepy Giant? Minerva, 58, 139–161. https://doi.org/10.1007/s11024-019-09388-z
Hosseini, M., & Horbach, S. P. J. M. (2023). Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. Research Integrity and Peer Review, 8(1), 1–9. https://doi.org/10.1186/s41073-023-00133-5
Hyde, A. (2025). PRC vs The Cathedral. How PRC Might Change Publishing.
International Science Council. (2023). The Case for Reform of Scientific Publishing. https://doi.org/10.24948/2023.14
Jamali, H. R., Wakeling, S., & Abbasi, A. (2022). Why do journals discontinue? A study of Australian ceased journals. Learned Publishing. https://doi.org/10.1002/leap.1448
Järvinen, T. L. N., Sihvonen, R., Bhandari, M., Sprague, S., Malmivaara, A., Paavola, M., Schünemann, H. J., & Guyatt, G. H. (2014). Blinded interpretation of study results can feasibly and effectively diminish interpretation bias. Journal of Clinical Epidemiology, 67(7), 769–772. https://doi.org/10.1016/j.jclinepi.2013.11.011
Johnson, R. (2024, June 28). Beyond the journal: The future of scientific publishing. FEBS. https://network.febs.org/posts/beyond-the-journal-the-future-of-scientific-publishing?channel_id=728-viewpoints
Kahn, P., Wareham, T., Young, R., Willis, I., & Pilkington, R. (2008). Exploring a practitioner-based interpretive approach to reviewing research literature. International Journal of Research & Method in Education, 31(2), 169–180. https://doi.org/10.1080/17437270802212312
Kaltenbrunner, W., Pinfield, S., Waltman, L., Woods, H. B., & Brumberg, J. (2022). Innovating peer review, reconfiguring scholarly communication: An analytical overview of ongoing peer review innovation activities. Journal of Documentation, 78(7), 429–449. https://doi.org/10.1108/JD-01-2022-0022
Karhulahti, V.-M., & Backe, H.-J. (2021). Transparency of peer review: A semi-structured interview study with chief editors from social sciences and humanities. Research Integrity and Peer Review, 6(1), 13. https://doi.org/10.1186/s41073-021-00116-4
Kelly, J., Sadeghieh, T., & Adeli, K. (2014). Peer review in scientific publications: Benefits, critiques, & a survival guide. The Journal of the International Federation for Clinical Chemistry and Laboratory Medicine, 25(3), 227–243.
Khraisha, Q., Put, S., Kappenberg, J., Warraitch, A., & Hadfield, K. (2024). Can large language models replace humans in systematic reviews? Evaluating GPT-4’s efficacy in screening and extracting data from peer-reviewed and grey literature in multiple languages. Research Synthesis Methods, 15(4), 616–626. https://doi.org/10.1002/jrsm.1715
Kling, R., & McKim, G. (1999). Scholarly communication and the continuum of electronic publishing. Journal of the American Society for Information Science, 50(10), 890–906. https://doi.org/10.1002/(SICI)1097-4571(1999)50:10<890::AID-ASI6>3.0.CO;2-8
Kochetkov, D. M. (2024). Post-Publication Review: Evolution of the Scientific Publishing Workflow. Economics of Science, 10(3), 8–21. https://doi.org/10.22394/2410-132X-2024-10-3-8-21
König, T., & Ropers, G. (2022). How Gendered Is the Peer-Review Process? A Mixed-Design Analysis of Reviewer Feedback. PS: Political Science & Politics, 55(1), 135–141. https://doi.org/10.1017/S1049096521000937
Kronick, D. A. (1990). Peer review in 18th-century scientific journalism. JAMA, 263(10), 1321–1322.
Locascio, J. J. (2017). Results Blind Science Publishing. Basic and Applied Social Psychology, 39(5), 239–246. https://doi.org/10.1080/01973533.2017.1336093
McCook, A. (2006, February). Is peer review broken? The Scientist. https://www.the-scientist.com/uncategorized/is-peer-review-broken-47872
Moxham, N., & Fyfe, A. (2018). The Royal Society and the prehistory of peer review, 1665–1965. The Historical Journal, 61(4), 863–889. https://doi.org/10.1017/S0018246X17000334
Murray, D., Siler, K., Larivière, V., Chan, W. M., Collings, A. M., Raymond, J., & Sugimoto, C. R. (2018). Author-Reviewer Homophily in Peer Review. BioRxiv, 1–61. https://doi.org/10.1101/400515
Neff, B. D., & Olden, J. D. (2006). Is Peer Review a Game of Chance? BioScience, 56(4), 333–340. https://doi.org/10.1641/0006-3568(2006)56%5B333:IPRAGO%5D2.0.CO;2
NISO RP-30-2023, Manuscript Exchange Common Approach (MECA) (Version 2.0.1). (2023). https://doi.org/10.3789/niso-rp-30-2023
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific Utopia. Perspectives on Psychological Science, 7(6), 615–631. https://doi.org/10.1177/1745691612459058
O’Connor, E. E., Cousar, M., Lentini, J. A., Castillo, M., Halm, K., & Zeffiro, T. A. (2017). Efficacy of Double-Blind Peer Review in an Imaging Subspecialty Journal. American Journal of Neuroradiology, 38(2), 230–235. https://doi.org/10.3174/ajnr.A5017
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251). https://doi.org/10.1126/science.aac4716
Perry, S. (2006). When is a process model not a process model: A comparison between UML and BPMN. IEE Seminar on Process Modelling Using UML, 51–63. https://doi.org/10.1049/ic:20060652
Registered Reports: Peer review before results are known to align scientific values and practices. (n.d.). Center for Open Science. Retrieved January 22, 2024, from https://www.cos.io/initiatives/registered-reports
Rennie, D. (1999). Editorial peer review: Its development and rationale. In F. Godlee & T. Jefferson (Eds.), Peer Review in Health Sciences (pp. 1–13). BMJ Books.
Resnik, D. B., & Smith, E. M. (2020). Bias and Groupthink in Science’s Peer-Review System. In Groupthink in Science (pp. 99–113). Springer International Publishing. https://doi.org/10.1007/978-3-030-36822-7_9
Roosendaal, H. E., Geurts, P. A. Th. M., & van der Vet, P. E. (2001). Developments in scientific communication: Considerations on the value chain. Information Services & Use, 21(1), 13–32. https://doi.org/10.3233/ISU-2001-21103
Ross-Hellauer, T. (2017). What is open peer review? A systematic review. F1000Research, 6, 588. https://doi.org/10.12688/f1000research.11369.2
Ross-Hellauer, T., & Horbach, S. P. J. M. (2024). Additional experiments required: A scoping review of recent evidence on key aspects of Open Peer Review. Research Evaluation, 33. https://doi.org/10.1093/reseval/rvae004
Rothstein, H. R., Sutton, A. J., & Borenstein, M. (2005). Publication Bias in Meta‐Analysis. In Publication Bias in Meta‐Analysis (pp. 1–7). Wiley. https://doi.org/10.1002/0470870168.ch1
Rousi, A. M., & Laakso, M. (2024). Overlay journals: A study of the current landscape. Journal of Librarianship and Information Science, 56(1), 15–28. https://doi.org/10.1177/09610006221125208
Royal Society of Chemistry. (2020). Joint commitment for action on inclusion and diversity in publishing.
Rühli, F. J., Finnegan, M., Hershkovitz, I., & Henneberg, M. (2009). Peer-review for the peer-review system. Human_ontogenetics, 3(1), 3–6. https://doi.org/10.1002/huon.200900004
Russell, C. L. (2005). An Overview of the Integrative Research Review. Progress in Transplantation, 15(1), 8–13. https://doi.org/10.1177/152692480501500102
Saper, C. B., Maunsell, J. H. R., & Sagvolden, T. (2009). The Neuroscience Peer Review Consortium. Behavioral and Brain Functions, 5, 4. https://doi.org/10.1186/1744-9081-5-4
Sarafoglou, A., Kovacs, M., Bakos, B., Wagenmakers, E.-J., & Aczel, B. (2022). A survey on how preregistration affects the research workflow: Better science but more work. Royal Society Open Science, 9(7), 211997. https://doi.org/10.1098/rsos.211997
Scheel, A. M., Schijen, M. R. M. J., & Lakens, D. (2021). An Excess of Positive Results: Comparing the Standard Psychology Literature With Registered Reports. Advances in Methods and Practices in Psychological Science, 4(2), 25152459211007467. https://doi.org/10.1177/25152459211007467
Schroter, S., Black, N., Evans, S., Carpenter, J., Godlee, F., & Smith, R. (2004). Effects of training on quality of peer review: Randomised controlled trial. BMJ, 328(7441), 673. https://doi.org/10.1136/bmj.38023.700775.AE
Schroter, S., Black, N., Evans, S., Godlee, F., Osorio, L., & Smith, R. (2008). What errors do peer reviewers detect, and does training improve their ability to detect them? Journal of the Royal Society of Medicine, 101(10), 507–514. https://doi.org/10.1258/jrsm.2008.080062
Silveira, L. da, & Abadal, E. (2024). Open peer review. Ciência Da Informação Express, 5, 1–18. https://doi.org/10.60144/v5i.2024.122
Smith, G. D., & Jackson, D. (2022). Integrity and trust in research and publication: The crucial role of peer review. Journal of Advanced Nursing, 78(11). https://doi.org/10.1111/jan.15438
Smith, O. M., Davis, K. L., Pizza, R. B., Waterman, R., Dobson, K. C., Foster, B., Jarvey, J. C., Jones, L. N., Leuenberger, W., Nourn, N., Conway, E. E., Fiser, C. M., Hansen, Z. A., Hristova, A., Mack, C., Saunders, A. N., Utley, O. J., Young, M. L., & Davis, C. L. (2023). Peer review perpetuates barriers for historically excluded groups. Nature Ecology & Evolution, 7(4), 512–523. https://doi.org/10.1038/s41559-023-01999-w
Smith, R. (2006). Peer Review: A Flawed Process at the Heart of Science and Journals. Journal of the Royal Society of Medicine, 99(4), 178–182. https://doi.org/10.1177/014107680609900414
Soderberg, C. K., Errington, T. M., Schiavone, S. R., Bottesini, J., Thorn, F. S., Vazire, S., Esterling, K. M., & Nosek, B. A. (2021). Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 5(8), 990–997. https://doi.org/10.1038/s41562-021-01142-4
Spezi, V., Wakeling, S., Pinfield, S., Creaser, C., Fry, J., & Willett, P. (2017). Open-access mega-journals. Journal of Documentation, 73(2), 263–283. https://doi.org/10.1108/JD-06-2016-0082
Squazzoni, F., Bravo, G., Dondio, P., Farjam, M., Marusic, A., Mehmani, B., Willis, M., Birukou, A., & Grimaldo, F. (2020). No evidence of any systematic bias against manuscripts by women in the peer review process of 145 scholarly journals. https://doi.org/10.31235/osf.io/gh4rv
Squazzoni, F., Bravo, G., Farjam, M., Marusic, A., Mehmani, B., Willis, M., Birukou, A., Dondio, P., & Grimaldo, F. (2021). Peer review and gender bias: A study on 145 scholarly journals. Science Advances, 7(2). https://doi.org/10.1126/sciadv.abd0299
Steinhauser, G., Adlassnig, W., Risch, J. A., Anderlini, S., Arguriou, P., Armendariz, A. Z., Bains, W., Baker, C., Barnes, M., Barnett, J., Baumgartner, M., Baumgartner, T., Bendall, C. A., Bender, Y. S., Bichler, M., Biermann, T., Bini, R., Blanco, E., Bleau, J., … Zwiren, N. (2012). Peer review versus editorial review and their role in innovative science. Theoretical Medicine and Bioethics, 33(5), 359–376. https://doi.org/10.1007/s11017-012-9233-1
Stoddart, C. (2016). Is there a reproducibility crisis in science? Nature. https://doi.org/10.1038/d41586-019-00067-3
Sukharev, O. S. (2020). Topos of Russian peer review (on peer review as creativity, subject to amateurism). Investments in Russia, 10(309), 43–48.
Sun, M., Barry Danfa, J., & Teplitskiy, M. (2022). Does double-blind peer review reduce bias? Evidence from a top computer science conference. Journal of the Association for Information Science and Technology, 73(6), 811–819. https://doi.org/10.1002/asi.24582
Syed, M. (2023). Some Data Indicating that Editors and Reviewers Do Not Check Preregistrations during the Review Process. https://doi.org/10.31234/osf.io/nh7qw
Tite, L., & Schroter, S. (2007). Why do peer reviewers decline to review? A survey. Journal of Epidemiology & Community Health, 61, 9–12.
Völzer, H. (2010). An Overview of BPMN 2.0 and Its Potential Use. In J. Mendling, M. Weidlich, & M. Weske (Eds.), Business Process Modeling Notation. BPMN 2010. Lecture Notes in Business Information Processing, vol 67. (pp. 14–15). Springer. https://doi.org/10.1007/978-3-642-16298-5_3
Walker, R., Barros, B., Conejo, R., Neumann, K., & Telefont, M. (2015). Bias in peer review: A case study. F1000Research, 4, 21. https://doi.org/10.12688/f1000research.6012.1
Waltman, L., Kaltenbrunner, W., Pinfield, S., & Woods, H. B. (2023). How to improve scientific peer review: Four schools of thought. Learned Publishing, 36(3), 334–347. https://doi.org/10.1002/leap.1544
Waltman, L., Mulati, B., Ni, R., Wang, J., Lai, K. H. (Adrian), Luwel, M., Noyons, E., van Leeuwen, T., & Weimer, V. (2023). Preprinting and open peer review at the STI 2023 conference: Evaluation of an open science experiment. Leiden Madtrics. https://www.leidenmadtrics.nl/articles/preprinting-and-open-peer-review-at-the-sti-2023-conference-evaluation-of-an-open-science-experiment
Ward, T. N. (2015). Peer review: Toward improving the integrity of the process. Neurology, 85(20), 1734–1735. https://doi.org/10.1212/WNL.0000000000002136
Watling, C., Shaw, J., Field, E., & Ginsburg, S. (2023). ‘For the most part it works’: Exploring how authors navigate peer review feedback. Medical Education, 57(2), 151–160. https://doi.org/10.1111/medu.14932
Weed, M. (2005). “Meta Interpretation”: A Method for the Interpretive Synthesis of Qualitative Research. Forum Qualitative Sozialforschung, 6(1), 37.
Williams, J. B., & McNeill, J. M. (2005). The Current Crisis In Neoclassical Economics and the Case for an Economic Analysis Based on Sustainable Development. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.1606342
Willis, M. (2016). Why do peer reviewers decline to review manuscripts? A study of reviewer invitation responses. Learned Publishing, 29(1), 5–7. https://doi.org/10.1002/leap.1006
Wolfram, D., Wang, P., Hembree, A., & Park, H. (2020). Open peer review: Promoting transparency in open science. Scientometrics, 125(2), 1033–1051. https://doi.org/10.1007/s11192-020-03488-4
Woods, H. B., Brumberg, J., Kaltenbrunner, W., Pinfield, S., & Waltman, L. (2022). Innovations in peer review in scholarly publishing: A meta-summary. SocArXiv. https://doi.org/10.31235/osf.io/qaksd
Woznyj, H. M., Grenier, K., Ross, R., Banks, G. C., & Rogelberg, S. G. (2018). Results-blind review: A masked crusader for science. European Journal of Work and Organizational Psychology, 27(5), 561–576. https://doi.org/10.1080/1359432X.2018.1496081
Ye, R., Pang, X., Chai, J., Chen, J., Yin, Z., Xiang, Z., Dong, X., Shao, J., & Chen, S. (2024). Are We There Yet? Revealing the Risks of Utilizing Large Language Models in Scholarly Peer Review. 1–27.
Yngve, A., Margetts, B., & Sachdev, H. P. S. (2008). Guidance on publishing results and how we operate evidence-based policy. Public Health Nutrition, 11(7), 655–656. https://doi.org/10.1017/S1368980008002796
Zumel Dumlao, J. M., & Teplitskiy, M. (2023). The effect of reviewer geographical diversity on evaluations is reduced by anonymizing submissions. https://doi.org/10.31235/osf.io/754e3
Editors
Kathryn Zeiler
Jason Chin
Editorial assessment
by Jason Chin
This is version 3 of this work. The previous version was also reviewed on MetaROR.
In the last review, the reviewers agreed that the manuscript represented a valuable contribution, providing a critical comparison of evolving peer review models by assessing their strengths and limitations, as well as their suitability for a variety of contexts. The manuscript has now been further improved to address the limitations noted in that last round of review. In particular: the abstract no longer frames the article as historical scholarship; the methods section clearly defines the methodology as ‘interpretive, rather than systematic’; the review of peer review innovations clarifies how it builds on Waltman et al. (2023); and the comparative analysis provides further explanation of how the author chose peer review reform models for analysis. The revised manuscript also clarifies that some instances of PRC allow authors to submit to non-partner journals. In summary, the revised manuscript has addressed all the comments from the last round of review. It provides a cogent and useful contribution to the study of peer review.
Author response
The most recent version (version 5) is available at https://doi.org/10.31235/osf.io/b2ra3_v5.
In this version, two minor revisions were made:
- I have added two new references:
  - A foundational work by Björk (2007) (thanks to Stephen Pinfield for this suggestion)
  - A reference to the term “peer print,” which was introduced by Butler & Boisgontier in August 2025
- I have also corrected the reference list by adding two previously omitted studies and updated the Acknowledgements section.