Published at MetaROR
March 11, 2025
Drinking from the firehose? Write more and publish less
Christopher Steven Marcum
Originally published on August 27, 2024 at:
Editors
Ludo Waltman
Editorial assessment
by Ludo Waltman
In this blog post the author argues that problematic incentive structures have led to a rapid increase in the publication of low-quality research articles and that stakeholders need to work together to reform incentive structures. The blog post has been reviewed by three reviewers. Reviewer 3 considers the blog post to be a ‘great piece’, and Reviewer 1 finds it ‘compellingly written and thought provoking’. According to Reviewer 2, the blog post does not offer significant new insights for readers already familiar with the topic. All three reviewers provide recommendations for clarifications. Reviewers 2 and 3 also suggest the blog post could be more critical toward publishers. Reviewers 1 and 3 suggest taking a broader perspective on incentives, for instance by also considering incentives related to teaching and admin or incentives for funders, libraries, and other organizations.
Peer review 1
Summary of the essay
In this essay, the author seeks to explain the ‘firehose’ problem in academic research, namely the rapid growth in the number of articles but also the seemingly concurrent decline in quality. The explanation, he concludes, lies in the ‘superstructure’ of misaligned incentives and feedback loops that primarily drive publisher and researcher behaviour, with the current publish-or-perish evaluation system at the core. On the publisher side, these include commercial incentives driving both higher acceptance rates in existing journals and the launch of new journals with higher acceptance rates. At the same time, publishers seek to retain reputational currency by maintaining the consistency, and therefore brand power, of scarcer, legacy-prestige journals. The emergence of journal cascades (automatic referrals from one journal to another within the same publisher) and the introduction of APCs (especially for special issues) also contribute to commercial incentives driving article growth. On the researcher side, he argues that there is an apparent demand from researchers for more publishing outlets, alongside salami slicing, because authors feel they have to distribute relatively more publications among journals perceived to be of lower quality (higher acceptance rates) in order to gain prestige equivalent to that of a higher-impact paper. The state of peer review also feeds the firehose. The drain of PhD-qualified scientists out of academia, compounded by a lack of recognition for peer review, further contributes to the problem because there are insufficient reviewers in the system, especially for legitimate journals. Moreover, what peer review is done is no guarantee of quality (in highly selective journals as well as ‘predatory’ ones). One of his conclusions is that there is not just a crisis in scholarly publishing but in peer review specifically, and it is this crisis that will undermine science the most. Add AI into the mix of this publish-or-perish culture, and he predicts the firehose will burst.
He suggests that the solution lies in researchers taking back power themselves by writing more but ‘publishing’ less. By writing more he means outputs beyond traditional journal publications, such as policy briefs, blogs, preprints, data, code and so on, and that these should count as much as peer-reviewed publications. He places special emphasis on the potential role of preprints and on open and more collegiate preprint review acting as a filter upstream of the publishing firehose. He ends with a call for more collegiality across all stakeholders to align the incentives and thus alleviate the pressure causing the firehose in the first place.
General Comment
I enjoyed reading the essay and think the author does a good job of exposing multiple incentives and competing interests in the system. Although perverse incentives have been discussed in many articles and blog posts, the author specifically focuses on some of the key commercial drivers impacting publishing and the responses of researchers to those drivers. I found the essay compellingly written and thought-provoking, although it took me a while to work through the various layers of incentives. In general, I agree with the incentives and drivers he has identified, and especially his call for stakeholders to avoid polarization and work together to repair the system. Although I appreciate the need to have a focused argument, I did miss a more in-depth discussion of the equally complex layers of incentives for institutions, funders and other organisations (such as Clarivate) that also feed the firehose.
I note that my perspective comes from a position of being deeply embedded in publishing for most of my career. This will have also impacted what I took away from the essay and the focus of my comments below.
Main comments
- I especially liked the idea of a ‘superstructure’ of incentives as I think that gives a sense of the size and complexity of the problem. At the same time, by focusing on publisher incentives and researchers’ response to them he has missed out important parts of the superstructure contributing to the firehose, namely the role of institutions and funders in the system. Although this is implicit, I think it would have been worth noting more, in particular:
  - He mentions institutions and the role of tenure and promotion towards the end but not the extent of the immense and immobilizing power this wields across the system (despite initiatives such as DORA and CoARA).
  - Most review panels (researchers) assessing grants for funders are also still using journal publications as a proxy for quality, even if the funder policy states journal name and rank should not be used.
  - Many institutions/universities still rely on the number and venue of publications. Although some notable institutions are moving away from this, impact factor/journal rank is still largely relied on. This seems especially the case in China and India, for example, which have shown huge growth in research output. Although the author discusses the firehose, it would have been interesting to see a regional breakdown of it.
  - Libraries also often negotiate with publishers based on the volume of articles – i.e. they want evidence that they are getting more articles as they renegotiate a specific contract (e.g. transformative agreements), rather than, for example, also considering the quality of service.
  - Institutions are also driven by rankings in a way parallel to researchers being assessed on journal rank (or impact factor). How university rankings are calculated is also often opaque (apart from the Leiden Ranking), but publications form a core part. This further incentivises institutions to select researchers/faculty based on the number and venue of their publications in order to promote their own position in the rankings (and obtain funding).
- The essay is also about power dynamics and where power in the system lies. The implication in the essay is that power lies with the publishers and that this can be taken back by researchers. Publishers do have power, especially those in possession of high-prestige journals, and yet publishers are also subject to the power of other parts of the system, such as funder and institutional evaluation policies. Crucially, other infrastructure organisations, such as Clarivate, that provide indexing services and citation metrics also exert a strong controlling force on the system, for example:
  - Only a subset of journals are ever indexed by Clarivate, and funders and institutions also use the indexing status of a journal as a proxy for quality. A huge number of journals are thus excluded from the evaluation system (primarily in the arts and humanities, but also many scholar-led journals from low- and middle-income countries, as well as new journals). This further exacerbates the firehose problem because researchers often target only indexed journals. I’d be interested to see if the firehose problem also exists in journals that are not traditionally indexed (although I appreciate this is also likely to be skewed by discipline).
  - Indexers also take on the role of arbiters of journal quality and can choose to delist or list journals accordingly. Listing or delisting has a huge impact on submission rates to journals – worth millions of dollars to a publisher – but it is often unclear how quality is assessed and there seems to be large variance in which journals get listed or not.
  - Clarivate are also paid large fees by publishers to use their products, which creates a potential conflict of interest for the indexer, as delisting journals from major publishers could cause a substantial loss of revenue if those publishers withdrew their fees. Clarivate also relies on publishers to create the journals on which their products are based, which may create a further conflict if Clarivate wishes to retain the in-principle support of those publishers.
  - The recent delisting of eLife, even though it is an innovator and of established quality, shows the precariousness of journal indexing.
- All the stakeholders in the system seem to be essentially ‘following the money’ in one way or another – it’s just that the currency for researchers, institutions, publishers and others varies. Publishers – both commercial and indeed most not-for-profit – follow the requirements of the majority of their ‘customers’ (and that’s what authors, institutions, subscribers etc. are in this system) in order to ensure both sustainability and revenue growth. This may be a legacy of the commercialisation of research in the 20th century, but we should not be surprised that growth is a key objective for any company. It is likely that commercial players will continue to play an important role in science and science communication; what needs to change are the requirements of the customers.
- The root of the problem, as the author notes, is what is valued in the system, which is still largely journal publications. The author’s solution is for researchers to write more – and for value to be placed on this greater range of outputs by all stakeholders. I agree with this sentiment – I am an ardent advocate for Open Science. And yet, I also think that a focus on outputs per se, and not on practice or services, will always lead to the system being gamed in some way to increase the net worth of a specific actor in the system. Preprints and preprint review could themselves be subject to such gaming if value is placed on, for example, the preprint server or the preprint-review platform as a proxy for preprint, and then researcher, quality.
- I think the only way to start to change the system is to start placing much more value on both the practices of researchers (as well as outputs) and on the services provided by publishers. Of course, saying this is much easier than implementing it.
Other comments
- A key argument is that higher acceptance rates actually create a perverse incentive for researchers to submit as many manuscripts as possible because they are more likely to get accepted in journals with higher acceptance rates. I disagree that higher acceptance rates per se are the main incentive for researchers to publish more. More powerful is the fact that those responsible for grants and promotion continue to use quantity of journal articles as a proxy for research quality.
- Higher acceptance rates are not necessarily an indicator of low quality or a bad thing if it means that null, negative and inconclusive results are also published.
- The author states that Journal Impact Factors might have been an effective measure of quality in the past. I take issue with this because the JIF has, as far as I know, always been driven by relatively few outliers (papers with very high citations), and I don’t know of evidence to show that this wasn’t also true in the past. It also makes the assumption that citations = quality.
- The author asks at one point, ‘Why would field specialization need a lower threshold for publication if the merits of peer review are constant?’ I can see a case for lower thresholds, however, when the purpose of peer review is primarily to select for high impact, rather than the rigour, of the science conducted. A similar case might be made for multidisciplinary research, where peer reviewers tend to assess an article from their discipline’s perspective and reject it because the part that is relevant to them is not interesting enough… Of course, this all points to the inherent problems with peer review (on which I agree with the author).
- The author puts his essay in appropriate context, drawing on a range of sources to support his argument. I particularly like that he tried to find source material that was openly available.
- He cites two papers by Bjoern Brembs to substantiate the claim that there is potentially poorer review in higher-prestige journals than in lower-ranked journals. These papers were published in 2013 and 2018, and the conclusions relied, in part, on the fact that higher-ranked journals had more retractions. Apart from a potential reporting bias, given the flood of retractions across multiple journals in more recent years, I doubt this correlation still exists.
- The author works out submission rates from the published acceptance rates of journals. The author acknowledges this is only approximate and discusses several factors that could inflate or deflate the estimate. I can add a few more variables that could impact it, including: 1) the number of articles a publisher/journal rejects before articles are assigned to any editor (e.g. because of plagiarism, reporting issues or other research integrity issues); 2) the extent to which articles are triaged and rejected by editors before peer review (e.g. because an article is out of scope or not sufficiently interesting to peer review); 3) the number of articles rejected after peer review; and 4) the extent to which authors independently withdraw an article at any stage of the process. When publishers publish acceptance rates, they don’t make it clear what goes into the numerator or the denominator, and there are no community standards around this. The author rightly notes this process is too opaque. (A rough numerical sketch of how these stages affect the estimate follows below.)
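To make the arithmetic concrete, here is a minimal sketch (all figures are hypothetical; none are drawn from the essay or any real journal) of how the implied submission count shifts depending on whether desk rejections are counted in the denominator of a reported acceptance rate:

```python
# All figures below are invented for illustration only.

published = 500               # articles published per year (hypothetical)
reported_accept_rate = 0.25   # journal's stated acceptance rate (hypothetical)

# Naive estimate: submissions ~= published / acceptance rate
naive_submissions = published / reported_accept_rate   # 2000.0

# If desk rejections (plagiarism screening, out-of-scope triage) are excluded
# from the rate's denominator, the naive figure only counts manuscripts that
# reached peer review. Suppose 30% of all submissions were desk-rejected:
desk_reject_share = 0.30
total_submissions = naive_submissions / (1 - desk_reject_share)  # ~2857

print(f"naive estimate:               {naive_submissions:.0f}")
print(f"adjusted for desk rejections: {total_submissions:.0f}")
```

The same adjustment applies to each of the other stages listed above: every rejection or withdrawal stage excluded from the denominator widens the gap between the reported acceptance rate and the true submission volume.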
Catriona J. MacCallum
As is my practice, I do not wish to remain anonymous. Please also note that I work for a large commercial publisher and am writing this review in an independent capacity, such that this review reflects my own opinions, which are not necessarily those of my employer.
Peer review 2
This is a well-written and clear enough piece that may be helpful for a reader new to the topic. For people familiar with the field, there is not much here that is new. The final recommendation is not well expressed. As currently put it is, I think, wrong. But it is a provocative idea. I comment section by section below.
The first paragraphs repeat well-established facts that there are too many papers. Seppelt et al.’s contribution is missing here. The piece also reproduces the disingenuous claim, by a publisher’s employee, that publishers ‘only’ respond to demand. I do not think that is true. They create demand. They encourage authors to write and submit papers, as anyone who has been emailed by MDPI recently can testify. Why repeat something which is so inaccurate?
The section on ‘upstream of the nozzle’ is rather confusing. I think the author is trying to establish whether more work is being submitted. But this cannot be deduced from the data presented. No trends are given. Rejection rates will be a poor guide if the same paper is being rejected by several journals. I was also confused by the sources used to track growth in papers – why not just use Dimensions data? The final paragraph again repeats well-known facts about the proliferation of outlets and salami slicing. Thus far the article has not introduced new arguments.
Minor points in this section:
- There are some unsupported claims, e.g. ‘This is a practice that is often couched within the seemingly innocuous guise of field specialty journals.’
- I also do not understand the logic of this rather long sentence: ‘The expansion of journals with higher acceptance rates alters the rational calculus for researchers – all things being equal higher acceptance rates create a perverse incentive to submit as many manuscripts as possible since the underlying probability of acceptance is simply higher than if those same publications were submitted to a journal with a lower acceptance rate, and hence higher prestige.’ I suggest it be rephrased.
The section on peer review (Who’s testing the water) is mostly a useful review of the issues. But there are some problems which need addressing. Bizarrely, when discussing whether there are enough scientists, it fails to mention Hanson et al.’s global study, despite linking to its preprint in the opening lines. Instead the author adopts a parochial North American approach and refers only to PhDs coming from the US. Trends in one country cannot explain an international publishing scene. These are not the ‘good data’ the author claims. Likewise, the value of data on doctorates not going on to a post-doc hinges on how many post-docs there are. That trend is not supplied. The statement ‘Almost everyone getting a doctorate goes into a non-university position after graduation’ may be true, but no supporting data are supplied to justify it. Nor do we know what country, or countries, the author is referring to.
The section ‘A Sip from the Spring’ makes the mistaken claim that researchers hold market power. This is not true. Researchers’ institutions, their libraries and governments are the main source of publisher income. It is here that the key proposal for improvement is made: researchers can write more and publish less. But if the problem is that there is too much poorly reviewed literature then this cannot be the solution. Removing all peer review would mean there is even more material to read, whose appearance is not slowed by peer review at all. If peer review is becoming inadequate, evading it entirely is hardly a solution.
This does not mean we should not release pre-prints. The author is right to advocate them, but the author is mistaken to think that this will reduce publishing pressures. The clue is in their name ‘pre-print’. Publication is intended.
Missing from the author’s argument is recognition of the important role of the communities that researchers form, and the roles that journals play in providing venues for conversation, disagreement and discussion. They provide a filter. Yes, researchers produce material other than publications, as the author states: ‘grant proposals, editorials, policy briefs, blog posts, teaching curricula and lectures, software code and documentation, dataset curation, and labnotes and codebooks.’ I would add email and WhatsApp messages to that list. But adding all that to our reading lists will not reduce the volume of things to be read. It must increase it. And it would make it harder to marshal and search all those words.
But the idea is provocative nonetheless. Running through this paper, and occasionally made explicit, is the fact that publishers earn billions from their ‘service’ to academia. They have a strong commercial interest in our publishing more, and in competing with each other to capture a larger share of the market. If writing more, and publishing less, means we need to find ways of directing our thoughts so that they earn less money for publishers, then that could bring real change to the system.
A minor point: the firehose analogy is fully exploited and rather laboured in this paper. But it is a North American term and image that does not travel so easily.
Peer review 3
This op-ed addresses the issue of the exponential increase in publications and how it is leading to lower-quality peer review which, in turn, is resulting in more bad science being published. It is a well-written article that tackles a seemingly eternal topic. The piece focussed more on the positives and potential actions, which is nice to see, as this is a topic that can become stuck in the problems. There are places throughout that would benefit from more clarity, and at times there appears to be a bias towards publishers, almost placing blame on researchers. Very simple word changes or headings could immediately resolve any doubt here, as I don’t believe this is the intention of the article at all.
Additionally, this article is very focussed on peer review (a positive) but I think it would benefit from small additions throughout that zoom out and place the discussion in the context of the wider issues – for example, you cannot change peer review incentives without changing the entire set of incentives around “service” activities, including teaching, admin etc. This occurs to a degree with the discussion of other outputs, including preprints and data. Moreover, when discussing service-type activities, there are data revealing that certain demographics deliberately avoid this work. Adding this element to the article would provide a much stronger argument for change (and do some good in the current political climate).
Overall, I thought this was a great piece when it was first posted online; it does exactly what a good op-ed should – provoke thought and discussion. Below are some specific comments, in reading order. I do not believe that there are any substantial or essential changes required, particularly given that this is an op-ed article.
—–
Quote: “Academia is undergoing a rapid transformation characterized by exponential growth of scholarly outputs.”
Comment: There’s an excellent paper providing evidence for this, which would be a very positive addition: https://direct.mit.edu/qss/article/5/4/823/124269/The-strain-on-scientific-publishing
Quote: “it’s challenging to keep up with the volume at which research publications are produced”
Comment: It might be nice to add that this complaint dates back almost to the beginning of sharing research via print media, just to reinforce that this is a very old point.
Quote: “submissions of poor-quality manuscripts”
Comment: The use of “poor quality” here is unnecessary. That a submission is not accepted says nothing about its “quality”. As such, this seems to needlessly diminish work rejected by one journal.
Quote: “Maybe there are too many poor quality journals too – responding to an underlying demand to publish low quality papers.”
Comment: This misses the flip side – poor-quality journals encourage and actively drive low-quality and outright fraudulent submissions due to publisher dominance in the assessment of research and academics.
Quote: “even after accounting for quality,”
Comment: Quality is mentioned here but has yet to be clearly defined. What is “quality”? How many articles a journal publishes? The “prestige” of a journal? How many people are citing the articles?
Quote: “Researchers can – and do – respond to the availability by slicing up their work (and their data) into minimally publishable units”
Comment: I fully agree that some researchers do exactly this. However, again, this seems to be blaming researchers for creating this firehose problem. I think this point could be reworded to not place so much blame, or be substantiated with evidence that this is a widespread practice – my experience has been very mixed, in that I’ve worked for people who do this almost to the extreme (and have very high self-citation rates) and also worked for people who focus on the science and making it as high quality and robust as possible. I agree many respond to the explosion of journals and varied quality in a negative manner, but the journals, not researchers, are the drivers here.
Quote: “least important aspect of the expected contributions of scholars.”
Comment: I think it may be worth highlighting here that specific demographics (white males) sometimes actively avoid these kinds of service activities – there’s a good study providing data in support of this. It adds an extra dimension to the argument for appropriate incentives and the importance and challenges of addressing this.
Quote: “high quality peer review”
Comment: Just another comment on the use of “quality”. This is not defined, and I think when discussing these topics it is vital to be clear what one means by “high quality”. For example, high-quality peer review designed as quality control would detect gross defects and fraud, preventing such work from being published (peer review does not reliably achieve this). In contrast, high-quality peer review designed to help authors improve their work and avoid hyperbole would be very detailed and collegial, not requesting large numbers of additional experiments.
Quote: “conferring public trust in the oversight of science”
Comment: I’m not convinced of this. Conveying peer review as a stamp of approval or QC leads to reduced trust when regular examples of peer review failures emerge – just look at hydroxychloroquine and how peer review was used to justify it during COVID, or the MMR/autism issues that are still ongoing even after the work was retracted. I think this should be much more carefully worded, removed, or expanded on to provide this perspective – this occurs slightly in the following sentence but it is very important to be clear on this point.
Quote: “Researchers hold an incredible amount of market power in scholarly publishing”
Comment: I like the next few paragraphs but, again, this seems to be blaming researchers when they in fact hold little or no power. I agree that researchers *could* use market pressure, but this is entirely unrealistic when their careers depend on publishing X papers in X journal. An argument as to why science feels increasingly non-collaborative, perhaps. Funders can effect immediate and significant changes. Institutions adopting reward structures for activities such as teaching would have significant impacts on researcher behaviour. Researchers are adapting to the demands the publication system creates – more journals, greater quantity and reduced quality – whilst publishers maintain control over assessment; eLife being removed from WoS/Scopus is a prime example of publishers (via their parent companies) preventing innovation or even rather basic improvements.
Quote: “With preprint review, authors participate in a system that views peer review not as a gatekeeping hurdle to overcome to reach publication but as a participatory exercise to improve scholarship.”
Comment: This is framing that I really like; improving scholarship, not quality control.
Quote: “buy”
Comment: typo
Quote: “adoption of preprint review can shift the inaccurate belief that all preprints lack review”
Comment: Is this the right direction for preprints though? If we force all preprints to be reviewed and only value reviewed preprints, then we effectively dismantle the benefits of preprints and their potential that we’ve been working so hard to build. A recent op-ed by Alice Fleerackers et al. provided an excellent argument to this effect. More a question than a suggestion for anything to change.
Quote: “between all of those stakeholders to work together without polarization”
Comment: I disagree here – publishers have repeatedly shown that their only real interest is money. Working with them risks undermining all of the effort (financial, careers, reputation, time) that advocates for change put in. The OA movement should also highlight perfectly why this is such a bad route to go down (again). Publishers’ grip on preprint servers is a great example – those servers are hard to use as a reader, lack APIs and access to data, and are not innovative or interacting with independent services. The community should make the rules and then publishers abide by and within them. Currently the publishers make all of the rules and dominate. Indeed, this is possibly the biggest omission from this article – the total dominance of publishers across the entire ecosystem. You can’t talk about change without highlighting that the publishers don’t just own journals but the reference managers, the assessment systems, the databases etc. I may be an outlier on this point but for all of the people I interact with (often those at the bottom of the ladder) this is a strong feeling. Again, not a suggestion for anything to change, and indeed the point of an op-ed is to stimulate thought and discussion, so dissent is positive.
Note that these annotations were made in hypothes.is and are available here, linked in-text for ease – comments are duplicated in this review.
Author response
A few months back, Upstream editor Martin Fenner suggested that I submit my Upstream blog post titled, Drinking from the Firehose? Write More and Publish Less, for peer-review as a sort of experiment for Upstream through MetaROR. MetaROR, a relative newcomer to the scholarly communication community, provides the review and curate steps in the “publish-review-curate” model for meta-research.
While I do not consider myself a meta-researcher (a scholar who conducts research on research), many of my positions on science policy have implications for the field (especially those on transparency, openness, and reproducibility). I think the main call in my blog post for reform in scholarly communication – namely, to stop publishing in traditional journals as much and start rewarding a broader swath of scholarly activities like data sharing – is particularly appealing to meta-researchers who rely on non-publication outputs for their work. So, I submitted. The article was openly reviewed, and MetaROR provided an editorial assessment. Here, I reply to the reviewers and contribute to the curation of the original post.
The reviews are very high-quality – in fact, they are some of the most well-reasoned reviews I’ve received in the 20 years I’ve been a scholar. If MetaROR represents the future of peer-review through the publish-review-curate model, scholarly communication is about to get a whole lot better. You can read the open reviews of my blog post here. The revised version of the editorial is here.
As in traditional peer-review, each individual reviewer provided their feedback independently of the others, and the handling editor did not curate the reviews. I prefer when editors do such curation, since it helps to organize the response in a way that reduces redundancy. This is one of the main benefits of group-based peer review systems – such as PREreview’s Live Review. Also, there was no easy way (or at least not an obvious one) to export the reviews in plaintext from MetaROR so I could respond point-by-point in software of my choice. Below is an attempt to organize my response roughly around the major criticisms and suggestions in the reviews. Because this was an opinion piece and not research, I’m not going to respond to every point – though nearly all of them I would have accepted and revised accordingly had this been a research article.
Too Easy on the Publishers, Too Hard on Researchers
All three reviewers expressed some dismay over how light my criticism of the publishers was in my blog piece. I do not disagree. The reviewers rightfully point out that the publishers play an outsized role in the inequity created in the scholarly communication space. However, I am choosing not to revise much here, as the essay was already too long – it would have taken a tome to articulate my criticism of the publishers. That’s out of scope. However, I revised the first paragraph in the conclusion to state:
The publishers are incentivized to avoid any other form of reform – this is the rational option that publishers choose in response to the apparent demand from researchers – as Ciavarella rightly pointed out.
Two of the reviewers also thought I was too harsh on researchers. I don’t think that I was overly harsh. All three agree with me that researchers have some market role here but disagree about the extent to which they can exert influence. One reviewer claims researchers have no market power (with which I respectfully disagree). I’ve clarified in the paper that: ‘the power any individual researcher has here is small. Collective action is needed.’ I reject that researchers are blameless for the status quo – complacency empowers the publishers. Unfortunately, it’s also baked into the superstructure of the reward system that is perpetuated by publisher-controlled market forces. I also added the following sentiment along these lines when discussing the market power of researchers:
It’s free to share and read research without the need for costly, anticompetitive gatekeeping. Leveraging that freedom is an untapped source of market power.
Focus More on Institutions and Funders and Communities
Two of the three reviewers thought I needed to draw more attention to the roles, demands, and influence that academic institutions, publisher consortia, libraries, indexing services, scholarly societies, and grassroots research organizations have in this ecosystem. I agree with all these points – and had Clarivate’s irresponsible delisting of eLife from the Web of Science happened before I wrote the original piece, I would have highlighted that, as one reviewer suggested.
No New Arguments or Analysis
The reviewers felt that, while well-articulated, the arguments I was espousing are not novel. First, I think it is worthwhile to renew the idea that we should be more selective in choosing what to publish in journals. Focusing on quality over quantity and valuing activities beyond journal publications should be repeated often until it’s common practice.
One comment called for more data and analysis, and another wanted some additional research cited. I think that’s a great idea and I hope the reviewers can do that work or perhaps the open review will inspire others to do so.
In response to the criticism that preprints both presuppose an eventual traditional publication and could be gamed, I revised that section accordingly:
There is risk of gaming preprints and preprint review just as there is in traditional publishing, such as by placing value on a paper for where it appears or how it was reviewed without considering its quality or contribution to science.
One reviewer misunderstood my point about preprints altogether:
Removing all peer review would mean there is even more material to read, whose appearance is not slowed by peer review at all. If peer review is becoming inadequate, evading it entirely is hardly a solution. This does not mean we should not release pre-prints. The author is right to advocate them, but the author is mistaken to think that this will reduce publishing pressures. The clue is in their name ‘pre-print’. Publication is intended.
I am absolutely not arguing for tossing out peer review. I strongly believe peer review is valuable but currently broken. Moreover, I reject that peer review needs to happen behind the gatekeeping of publishers. I revised to clarify here and added a footnote based on this reviewer’s latter observation.
Peer-review remains a critical check for pollutants in the waters – but the prevailing model needs significant reform. The traditional opaque, uncompensated system has eroded the quality, transparency, timeliness, and appropriateness of peer review due to competing priorities and a lack of appropriately aligned incentive structures. Novel models of peer review – including publish-review-curate, preprint review, and compensated review – offer a path forward, ideally all done transparently and with conflicts of interest declared out in the open. At the same time, not all manuscripts need review to have value, and most preprints with value (even those with reviews) should not be published in journals.
New footnote: The term ‘preprint’ is evolving – what was once a moniker for a non-peer-reviewed manuscript intended to eventually become reviewed and published (or, more likely, rejected) now scopes in other forms, including publish-review-curate and manuscripts with preprint reviews. A new labeling and metadata system is desperately needed to highlight the state of review of a particular manuscript in a record of versions. Version control systems and badging are ubiquitous in the open-source software community and could be easily adopted here.
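As a rough illustration of what this footnote imagines, here is a minimal sketch of a version record with review-status badging; the field names, statuses, and identifiers are hypothetical and do not follow any existing metadata standard:

```python
# Hypothetical version record for a preprint; all fields are invented
# for illustration only, not drawn from any real schema or service.

preprint_record = {
    "id": "example-preprint-001",   # placeholder identifier
    "versions": [
        {
            "version": "v1",
            "date": "2024-08-27",
            "review_status": "not_reviewed",     # badge: no review attached
            "reviews": [],
        },
        {
            "version": "v2",
            "date": "2025-03-11",
            "review_status": "openly_reviewed",  # badge: open reviews attached
            "reviews": [
                {"platform": "MetaROR", "url": "https://example.org/review/1"},
            ],
        },
    ],
}

# A display badge could be derived from the latest version's status:
latest = preprint_record["versions"][-1]
print(f"{preprint_record['id']} {latest['version']}: {latest['review_status']}")
```

A badge rendered from the latest version’s status would let readers see at a glance whether, and how, a given version of a manuscript has been reviewed.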
Volume is Volume is Volume
Probably the most important critique among the set of reviews points out an apparent recursion in the logic of the thesis that I need to clarify: you can’t solve the firehose problem by writing more, as that just adds more volume to the flow. My revision to the conclusion clarifies my intent: what I’m proposing is to stop sending so many papers to journals for publication and to choose preprints more often for reading, reviewing, and writing. At the same time, the system should maintain or increase non-publication scholarly outputs and reward those too.
“Write-More” here is a placeholder for all the non-publication writing scholars do and should get credit for from their institutions and fields. Again, I happen to focus on writing because that’s what I care about in this editorial and it would take volumes to pontificate on all the other services and activities that happen within the academy that are not properly rewarded.
Summary
Having my blog post peer-reviewed through MetaROR was a positive experience and I recommend the service. However, my post was still just an editorial – my opinions and thoughts – not research. Had this been a research article, however, the reviews as presented would have been a very good roadmap to improving the paper. For MetaROR, I have two suggestions: 1) the editorial assessment could be improved by organizing the key points, and 2) the reviews could be made downloadable in plaintext for ease of importing into an editor.
Acknowledgments
Special thanks are owed to the reviewers, Catriona MacCallum, Dan Brockington, and Jonny Coates, the MetaROR handling editor Ludo Waltman, and to Upstream Editor and Front Matter founder Martin Fenner for the crazy idea to peer-review a blog post.
Disclosure
The opinions expressed here are my own and may not represent those of my employer, my associates, or the reviewers. I have no conflicts of interest to disclose.
This author response was previously published on Upstream.


