Published at MetaROR

October 22, 2025

Cite this article as:

Agrawal, A., & Koley, M. (2025). Open Reviews is a good first step. Pseudo-Anonymous Reviews can take it further. Zenodo. https://doi.org/10.5281/zenodo.16598755


Open Reviews is a good first step. Pseudo-Anonymous Reviews can take it further.

Achal Agrawal1, Moumita Koley2

1. India Research Watch
2. DST Center for Policy Research, Indian Institute of Science

Originally published on July 30, 2025 at: 

Abstract

Nature recently announced that all of its peer reviews will henceforth be open. This is a good first step that will hopefully extend to other Springer Nature journals as well as other major publishers, and it will go a long way toward improving trust in science. Reviewers will also be allowed to choose to remain anonymous. Blind peer review is one of the pillars of the publication system, and the choice of reviewers needs to be respected.

However, there have been many reported cases of reviewers abusing this anonymity to solicit citations to their own articles, sometimes not even related to the paper under review. Increasingly, many reviews are also produced by AI, leading to low-quality feedback.

To mitigate these issues, one can make the reviews Pseudo-Anonymous, i.e., traceable. Research integrity teams can then identify bad actors, and traceability will act as a deterrent against shoddy, lazy, or unfair reviews.

Additionally, it will provide useful analytics for both editors and readers and help them make more informed decisions.


How will this work?

Consider a reviewer called Jamie, who may review multiple papers. Each review, which will be open, will carry a different pseudonym to preserve the reviewer's anonymity. This is already how open peer review with optional anonymity is set to work.

Figure 1. Proposed framework to ensure reviewer anonymity while enabling traceability

We propose that, while the reviews remain anonymous to the public, the publisher maintains a secure database linking all of Jamie's reviews. This database should not be accessible even to editors; only research integrity teams should have access to it.

How will this help?

There are multiple benefits of the proposed framework:

  1. Deterrent against lazy and malevolent reviewers: It will allow publishers to analyze (perhaps automatically) all the reviews written by one person. If a reviewer is copy-pasting their reviews or consistently soliciting citations, the pattern can be recognized by AI and flagged to the publisher. It can also help editors and publishers spot reviewers who review implausibly prolifically, perhaps with the help of AI-generated reviews.

  2. Increase fairness by normalizing the reviews: It will also help reduce inconsistency in the review process: different reviewers use different scales, which makes reviews hard to compare. In the proposed framework, a reviewer's score can be normalized against the average score they give, making the process fairer and less prone to the vagaries of individual reviewers.

    Thanks to pseudo-anonymity, editors as well as readers can see aggregate statistics for a reviewer and make more informed decisions.

    Figure 2. Reviewer analytics available to the editor. For average readers, the name will be anonymized.
  3. Provide incentives to reviewers by giving them credit: It is becoming increasingly difficult to find reviewers, as they gain little from reviewing. A centralized database would help publishers reward the reviewers who keep the system functional.
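The automated check described in point 1 could start with plain pairwise text similarity. Below is a minimal sketch (our illustration, not a specification from the article) that flags near-duplicate reviews linked to one identity using Python's standard difflib; the 0.8 threshold is an arbitrary choice.

```python
from difflib import SequenceMatcher

def flag_copy_paste(reviews: list[str], threshold: float = 0.8) -> list[tuple[int, int]]:
    """Return index pairs of reviews that are suspiciously similar,
    suggesting copy-pasted or templated reports."""
    flagged = []
    for i in range(len(reviews)):
        for j in range(i + 1, len(reviews)):
            similarity = SequenceMatcher(None, reviews[i], reviews[j]).ratio()
            if similarity >= threshold:
                flagged.append((i, j))
    return flagged
```

A real system would likely use more robust measures (shingling, embeddings), but even this simple pass would surface a reviewer who files essentially the same report on every manuscript.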
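The normalization in point 2 can be illustrated with a z-score against each reviewer's own scoring history. This is again an illustrative sketch; the article does not prescribe a formula.

```python
from statistics import mean, stdev

def normalized_score(score: float, reviewer_history: list[float]) -> float:
    """Express a score relative to this reviewer's own average and
    spread, so harsh and lenient reviewers become comparable."""
    avg = mean(reviewer_history)
    spread = stdev(reviewer_history) if len(reviewer_history) > 1 else 1.0
    return (score - avg) / (spread or 1.0)
```

A harsh reviewer's 6/10 (against a personal average of 4) comes out strongly positive, while a lenient reviewer's 8/10 (against a personal average above 9) comes out negative, so the two become comparable for an editor.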

Potential issues

One limitation of such a system is that it still does not address reviews across different publishers. Hopefully, publishers will in the future coordinate amongst themselves to create a common database of reviewer profiles. In any case, this is still much better than complete anonymity, where even two editors of journals at the same publishing house cannot compare notes.

A centralized database does increase the chances of data leaks. Any such system will need adequate protection, such as encrypted storage of details in the database, so that even in the case of a leak there is little harm. Secure and robust solutions of this kind already exist and are regularly used by internet companies that store sensitive user data.

If implemented, this solution will improve the trust in science even further as well as help catch lazy or ill-intentioned reviewers using the peer-review system to game citations or increase their influence by reviewing prolifically.

Lost Opportunity: The Invisible Cost of Lost Peer Review

Now, consider what happens when a manuscript is rejected after the first round of peer review. The intellectual effort invested by reviewers is effectively lost — an especially frustrating reality for academics who volunteer their time and expertise for free.

While some publishers offer manuscript transfer options within their journal portfolios, the accompanying peer reviews are often not reused. In some cases, authors revise and improve their manuscripts based on the feedback, which is at least a partial gain. More often than not, however, those reviews go unused. The proposed system could at least help account for those reviews.

One study found that even with a rejection rate of just 50%, around 97% of papers eventually get published — typically by the fifth round, after four resubmissions. Even with a high rejection rate of 90%, about 41% of papers still make it to publication after four attempts.

This shows how much peer review effort goes to waste: intellectual contributions are lost with every cycle, and reviewers' time is repeatedly taken up evaluating similar versions of the same manuscript.
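The percentages quoted above are consistent with a simple model in which each submission round is an independent draw with a fixed acceptance rate p, so the chance of being published within n rounds is 1 - (1 - p)^n:

```python
def prob_published(acceptance_rate: float, rounds: int) -> float:
    """Probability of eventual publication within `rounds` attempts,
    assuming each round is an independent draw."""
    return 1 - (1 - acceptance_rate) ** rounds

# 50% rejection rate (acceptance 0.5): ~97% published within five rounds
print(round(prob_published(0.5, 5), 3))  # → 0.969
# 90% rejection rate (acceptance 0.1): ~41% published within five rounds
print(round(prob_published(0.1, 5), 3))  # → 0.41
```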

Future possibilities

The peer review system is a pillar, if not the bedrock, of science, and it is important that it is not abused. Given the exponential rise in retracted articles and the increasing use of AI for generating reviews, it is paramount that reviews become more systematic and face greater scrutiny.

We hope that Nature and other publishers extend open peer review to all their journals, including mega-journals like Scientific Reports. Ideally, publishers would coordinate with one another to improve the efficiency of reviews and keep a closer watch on bad-actor reviewers using the proposed framework.

We welcome any feedback and potential pitfalls of the proposed system that we might have overlooked.

Editors

Ludo Waltman
Editor-in-Chief

Ludo Waltman
Handling Editor

Editorial Assessment

by Ludo Waltman

DOI: 10.70744/MetaROR.178.1.ea

This short contribution proposes an approach to open peer review in which review reports are published openly and reviewers are pseudo-anonymous, meaning that readers of a review report do not know the identity of the reviewer while the publisher does know the reviewer’s identity. The contribution has been reviewed by two reviewers. Reviewer 2 considers the proposed approach to be good in theory, but the reviewer argues the approach will work only if it is combined with an approach for identity verification. Reviewer 1 argues that an alternative approach to pseudo-anonymous peer review, in which researchers can build a public profile as trustworthy reviewers, has benefits over the approach proposed by the authors. The reviewer also encourages the authors to further elaborate their arguments and to clarify a number of points that the reviewer finds difficult to understand.

Recommendations for enhanced transparency

  • Add a Data Availability Statement to report that no data are used.

  • Add author ORCID IDs.

  • Add an author contribution statement. The use of the CRediT taxonomy for reporting author contributions is encouraged.

  • Add a competing interest statement. Authors should report all competing interests, including not only financial interests, but any role, relationship, or commitment of an author that presents an actual or perceived threat to the integrity or independence of the research presented in the article. If no competing interests exist, authors should explicitly state this.

  • Add a funding source statement. Authors should report all funding in support of the research presented in the article. Grant reference numbers should be included. If no funding sources exist, explicitly state this in the article.

Competing interests: None.

Peer Review 1

Daniela Saderi

DOI: 10.70744/MetaROR.178.1.rv1

Summary

This preprint reads as an opinion piece in which the authors discuss the potential benefits and what they see as potential issues of introducing a specific kind of pseudoanonymity to existing processes of open peer review. The authors argue that their suggested form of pseudoanonymity—which envisions a system in which publishers would maintain a secured database connecting each review to a pseudonym associated with a specific person for each review—paired with the open publication of peer review reports may help “improve the trust in science even further as well as help catch lazy or ill-intentioned reviewers using the peer-review system to game citations or increase their influence by reviewing prolifically.” While I completely support efforts that put forward alternative solutions to the existing way scholarly peer review is conducted with the goal of improving it, I believe this piece would benefit from a more in-depth and clearer argumentation of all points raised by the authors, as well as the exposure of potential unintended consequences of the suggested model. Below I list major and minor concerns—a categorization that is completely subjective but I use with the intent of separating issues I believe need the most attention from those that may be quicker to fix and/or be deprioritized in the context of revising the work for a future version of the article.

Major concerns and feedback

  • The article would benefit from a more in-depth elaboration of all the arguments raised. For example, the authors may consider adding an introduction in which they more extensively articulate the issues they think would be mitigated by the implementation of their pseudoanonymity model.

    While the authors do list specific benefits of pseudoanonymity, I’d say that overall it seems like they suggest it as a model to prevent “bad actors” from abusing their power as reviewers. I would love for the authors to think about how pseudoanonymity may help mitigate bias in peer review, as well as helping a reviewer engage with open peer review in a safer albeit rewarding way.

  • In my opinion, the way the authors propose pseudoanonymity to be implemented would prevent some of the most important benefits of pseudoanonymity. If I understood it correctly, the authors are suggesting that only the publisher would have access not only to the identity of the reviewer but also to their pseudonym in relation to the reviews they wrote. This model would prevent an important aspect of pseudoanonymity, which is the opportunity for reviewers to build a public profile as trustworthy peer reviewers using a unique pseudonym that is the same across reviews and is revealed to the readers. For example, at PREreview, the organization I co-founded and for which I’m the Executive Director, users (aka PREreviewers) have to register through their ORCID iD and, in addition to their public profile associated with their ORCID record, they receive a unique pseudonym (in our case, a random color and a random animal). This allows PREreviewers to use their pseudonym to publish their preprint reviews (they can choose which profile to use on a per-review basis) if they don’t feel quite ready to contribute with their name, maybe because they fear retaliation, or feel like revealing their name could harm them in some other way. One can imagine, for instance, an early-career researcher who contributes a number of preprint reviews using their unique pseudonym, let’s say “Pink Jaguar”. Nobody knows who Pink Jaguar is, but readers of preprint reviews can decide that Pink Jaguar is a trustworthy, constructive peer reviewer who is active within the community. The person with the Pink Jaguar account may decide to reveal their pseudonym to a trusted group of colleagues, or to the wider community whenever they feel ready.
At the same time, if Pink Jaguar has any malicious intentions—for example starts targeting competitors with consistently negative and unconstructive reviews, or reviews their own work or the work of their friends with a consistently positive pattern—their identity can be uncovered by the PREreview Safety Team who can and will investigate and enforce the Code of Conduct, possibly leading to the termination of this person’s account and ability to interact in PREreview’s public spaces. It would be interesting to think about how such a system may apply to journal peer review publication. I can see how this may also help with reducing editors’ bias in selecting who is an expert as they would be able to select reviewers they don’t know by name, but they may trust based on the review history they built.

    The authors state that pseudoanonymity would “help provide useful analytics for both editors as well as readers and help them take more informed decisions.” But I’m unclear about how readers would be able to make “more informed decisions” as to them the reviews would still be completely anonymous. Also what are examples of such “informed decisions”? I may have misunderstood this point, but I invite the authors to further elaborate on this point as I believe it’s an important one.

  • When explaining how pseudoanonymity may work, the authors provide an example of the reviewer using a pseudonym like “Jamie.” While I don’t want to get too hung up on the exact example used to make their point, I think it’s worth mentioning that for all the benefits of pseudoanonymity to work—some of which may not be possible in the context of the model the authors are presenting, as argued above—I believe that the pseudonym should be unique to the reviewer and can’t be just a name; this point ties up to the argument above and our choice at PREreview about how to implement pseudoanonymity.

  • I’m unclear about what the authors mean by “Increase fairness by normalizing the reviews.” I am not familiar with all publishers’ editorial processes, but presumably different publishers ask reviewers to use different scoring systems if any at all. What exactly would scoring and averaging the scoring a reviewer gives to manuscripts help with and how would that standardize across publishers? Is this related to the concept I tried to elaborate in a previous point where pseudonymity can help identify patterns of reviewing practices that can inform publishers and the wider community about a specific reviewer’s attitudes/style towards writing a review (e.g., “Pink Jaguar consistently writes thorough, constructive, and clear reviews, therefore I (the editor) will invite them to review more or therefore I (a reader) am inclined to trust their reviews”, versus “Pink Jaguar consistently writes negative, unconstructive, and unclear reviews, therefore I (the editor) will not invite them to review anymore or therefore I (a reader) am inclined to not trust their reviews as much” ).

  • Another point I’m unclear about is how the authors imagine pseudoanonymity may “Provide incentives to reviewers by giving them credit.” It’d be curious to know what kind of credit the authors are thinking about; is it monetary? Publishing tokens?

  • I’m also unclear about the argument put forward in the section titled “Lost Opportunity: The Invisible cost of lost Peer Review” and how it relates to the pseudoanonymity model they propose. I can make some guesses, but I would like it if the authors made that connection explicit.

  • I was unable to click on the hyperlinks provided throughout the article. The authors should consider listing all the references at the bottom of the preprint as opposed to or in addition to embedding hyperlinks in the text of the pdf. This would greatly help readers dive deeper into the argument and make the article more accessible to screen readers.

Minor concerns and feedback

  • I recommend not using the term “blind” to refer to peer review in which the reviewer’s identity is not known. In order to avoid ableist terms, it would be appropriate to adopt terms such as “double-anonymized” or “single-anonymized” or simply “anonymized” peer review. As it’s been articulated here and elsewhere, “‘Blind’ in this context arises from and reinforces stereotypical understandings of disability which negatively impact blind and low vision people” (see for example the discussion from the Royal Society of Chemistry and the American Philosophical Association (APA)).

  • Please consider using the “they/them” pronouns when referring to the reviewer or the editor or any person. The authors start by writing “he/she” but then default to the “he” pronoun. The use of they/them helps maintain neutrality of gender identity and helps combat the stereotyping of expertise around a specific gender.

  • The authors suggest a “centralized database” to coordinate across publishers on incentives and the sharing of reviewers across publishers. While the authors mention that this would increase the risk of data leaks, they should also consider other negative aspects of such a system, like for example the potential for editors and other publishing staff to reveal the pseudonym of a reviewer. What systems would be in place to prevent such behavior? At the root of the problem of many issues in scholarly evaluation are not simply issues that can be solved with better technology and better “policing”, but are systemic issues of power imbalances, toxic competitive environments shaped by ill-placed reward systems and engrained biases common to human societies. Can we build systems that intentionally mitigate these issues and center care and reduce harm? I’m curious to know if authors have thought about some of these aspects and if they think there are ways the suggested model may help shift culture and combat oppressive systems more intentionally.

  • The statement that “Peer review system is a pillar, if not the bedrock of science.” is quite a strong one and not universally accepted. I’m not suggesting this article takes on arguing the very existence of peer review as we know it, but it may be helpful to mention that not everyone thinks that way and perhaps add a few more citations throughout the piece.

  • The preprint would benefit from some minor copy editing to enhance clarity of the arguments.

Competing interests: Dr. Daniela Saderi is the Co-founder and Executive Director of PREreview, an open preprint review platform that has supported the implementation of pseudonyms since its inception.

Peer Review 2

Kim Eggleton

DOI: 10.70744/MetaROR.178.1.rv2

The idea is a good one in theory, and I absolutely agree with all the points raised about lack of anonymity being a barrier to open peer review. Reviewers do want to stay anonymous. Readers and the public want peer review to be more transparent.

I wonder if ORCID could be a potential partner for this idea, given the role that system already plays in collating a database of unique actors and allowing for the recording of reviewer activity?

I do also think, however, that to really guard against the “bad actors” you describe, we need identity verification. Without it, bad actors can and will manipulate the system by using different accounts and names – people have multiple ORCIDs, for example. I think the arguments in this paper could be made stronger (and yes, the solution more difficult to implement) by including identity verification as an important part of this puzzle. With identity verification, the publishers could all share this unique database (potentially ORCID, but it doesn’t have to be) and we could really start to get some incredibly valuable data about peer review and the people who perform it. The publisher I work for already has all this information, and we can use it to see who the great reviewers are and reward them for that activity – it would be really powerful to be able to share this (safely, and legally) across the wider industry.

Competing interests: The reviewer works for a publisher.
