Published at MetaROR

March 11, 2025

Cite this article as:

Marcum, C.S. (2024). Drinking from the Firehose? Write More and Publish Less. Upstream. https://doi.org/10.54900/r8zwg-62003

Drinking from the firehose? Write more and publish less

Christopher Steven Marcum

Originally published on August 27, 2024 in Upstream: https://doi.org/10.54900/r8zwg-62003

Abstract

Academia is undergoing a rapid transformation characterized by exponential growth of scholarly outputs. This phenomenon, often termed the “firehose problem,” presents significant challenges for researchers, publishers, funders, policymakers, and institutions alike. Some of these challenges include stresses on the peer-review system, a lower capacity to stay abreast of the latest research, a shift in the value of quantity over quality of scholarship, and a divergence between the rewards and incentives for producing outputs that meet funder and societal expectations. In this essay, the implications of the firehose problem and potential approaches to resolving it through reform of incentives and rewards for publishing are explored.

The firehose

At the 2024 National Academies Workshop, Enhancing Public Access to the Results of Research Supported by the Department of Health and Human Services, Tom Ciavarella of Frontiers re-raised an idea that periodically pops up in conversations about scholarly publishing: there are too many research articles being submitted for publication to journals – that is, we publish too much. This was in response to a researcher who complained about the firehose problem in academic publishing – it’s challenging to keep up with the volume at which research publications are produced. The researcher was, by implication, placing some blame for the firehose on the publishers. Ciavarella noted that publishers only respond to demand – they build bigger pipes on which to fit bigger hoses; if there is too much research coming out of the publication firehose, he claimed, it’s because there are too many submissions for publication. The relationship between supply and demand in academic publishing is more complex than this analogy supposes. After all, peer-reviewed publications are both the traditional currency that academic institutions exchange for tenure and promotion and the fiat currency of publishers. More than fifty years after Silen’s seminal editorial criticizing this currency, it’s still a publish or perish world.

Upstream of the nozzle

To assess Ciavarella’s claim, a reasonable estimate of the global number of article submissions to academic journals would be needed. Submissions represent the reservoir supplying the firehose. It is, however, incredibly difficult to get accurate data on the number of submissions to journals. Most publishers keep their data closed – a point that the Office of Science and Technology Policy respectfully glosses over in its Sisyphean reports to Congress on public access to federally funded research. Publishers do, however, advertise their journal “acceptance rates,” in part because of a belief that such rates are inversely proportional to journal prestige – the more submissions a journal rejects relative to the number it accepts, the more prestigious that journal claims to be, alongside other factors such as readership and citation rates. Publishers translate that prestige into brand power – creating market forces that drive additional submissions from researchers eager to attach their names to recognizable brands.

In a preprint from a few years ago, Rachel Herbert of Elsevier’s International Center for the Study of Research (ICSR) evaluated the acceptance rates of over 2,000 journals (80% of them published by Elsevier) in 2017. The study found that the average acceptance rate was 32%, with a range of 1.1% to 93.2% – similar to rates found independently around the same time. Acceptance rates, of course, are a function of editorial policy and can be influenced by the publisher through a range of factors, such as issue page allotments, the scope of the journal’s field, editorial prerogative, reviewer consensus on the potential scientific importance of a submission, and the number of submissions the journal receives. Leaving aside the fact that acceptance rates can be artificially deflated (and prestige factors artificially inflated) simply by increasing the number of submissions of poor-quality manuscripts, they can be used in conjunction with estimates of the total number of articles published to come up with a reasonable back-of-the-envelope estimate of the number of submissions.

According to Dimitri Curcic at WordsRated, more than 64 million papers have been published in academic journals since 1996, and output has grown by 23% in the last five years alone. During that time, the number of active journals has increased by more than 28% – outpacing the proliferation of articles (which suggests that much of this journal growth reflects higher acceptance rates, since there are more venues for articles that would otherwise not have been accepted for publication). These data also suggest that this relationship has weakened in recent years, with the average annual growth rate of journals (1.67%, 2016-2020) being about a third of that of published articles (5.28%, 2018-2022) in the last five years of available data for each.

The WordsRated data suggest that there were, conservatively, around 4 million articles published in 2017 – the year of Herbert’s ICSR preprint (about 2.5 million of those were in science and engineering fields, based on National Science Foundation estimates). Combining these data, we can reasonably estimate that roughly 12.5 million articles were submitted globally in 2017 – almost 24 submissions per minute.[1] That is a lot of papers under consideration in the scholarly publishing market.
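
As a minimal sketch of the arithmetic in note [1] – using only the figures cited above (4 million published articles and a 32% average acceptance rate), and ignoring complications such as the same manuscript moving through several journals – the estimate works out as follows:

```python
# Back-of-the-envelope estimate of global journal submissions in 2017,
# using the WordsRated publication count and Herbert's average acceptance rate.
published_2017 = 4_000_000        # articles published in 2017 (conservative)
avg_acceptance_rate = 0.32        # mean journal acceptance rate (Herbert, 2020)
minutes_per_year = 365 * 24 * 60  # 525,600 minutes

submissions = published_2017 / avg_acceptance_rate  # ~12.5 million
per_minute = submissions / minutes_per_year         # ~23.8, i.e. almost 24

print(f"Estimated submissions in 2017: {submissions:,.0f}")
print(f"Submissions per minute: {per_minute:.1f}")
```

As note [1] observes, this should conservatively underestimate the true rate.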

But is it too many submissions translating into too many papers coming out of the nozzle of the firehose? Because the firehose analogy is an oversimplification of a complex system, we need to consider more factors than just the demand from researchers for more outlets to publish in and the publishers’ response in supplying those outlets. Whether this system has resulted in a surplus of published manuscripts – flooding the streets well after the fire is out – is a question about the quality of that surplus.

Is the water from the firehose safe to drink?

Few would argue that a surplus of high-quality, groundbreaking, innovative research is characteristic of a system producing too many papers. On the other hand, an excess of poor-quality, low-impact, and questionable research published in journals that are fit for that purpose should[2] give pause to anyone drinking from the firehose. That is, perhaps it’s not just the excess number of papers that contributes to the surplus of frankly poor-quality research published. Maybe there are too many poor-quality journals too – responding to an underlying demand to publish low-quality papers.

During the digital transformation to online publication, there was only one variable that the publishers needed to tune to satisfy this demand – increase the acceptance rate of existing journals. But doing that would have diminished the underlying value of their high prestige journals. So, instead, they tuned two variables by increasing the number of journals with higher acceptance rates in the system while protecting their big brands with constant rates. This is a practice that is often couched within the seemingly innocuous guise of field specialty journals. Why would field specialization need a lower threshold for publication if the merits of peer review are constant?

Here too, however, even after accounting for quality, the relationship between the number of submissions and the number of journals is more complex than a simple feedback loop. There is a hidden dependence resulting from the very rational behavior of researchers contributing to this loop. The expansion of journals with higher acceptance rates alters the rational calculus for researchers: all else being equal, a higher acceptance rate creates a perverse incentive to submit as many manuscripts as possible, because each submission is simply more likely to be accepted than it would be at a journal with a lower acceptance rate and hence higher prestige. Publishers often compound this incentive by offering automatic referrals for papers rejected from higher-prestige journals to lower ones – allowing almost instantaneous and friction-free forwarding of a rejected manuscript to another journal in their own portfolios (and certainly never to a competitor’s journal). Absent significant disruption, this feedback loop is self-replicating and self-expanding.

The idea that there are too many journals accepting too many manuscripts for publication is not new. Christa Easton summarized the pressures that libraries faced sustaining subscriptions to the growing number of serial publications during the digital transformation in the 1990s while, simultaneously, publishers faced increased incentives to produce more journals. In 1997, the great migration to digital formats was only just beginning and came at significant risk and cost – now, it’s effectively effortless for a publisher to spin off a new online-only journal. Publishers – and not just experienced publishers but really anybody, including scholarly societies and unaffiliated individuals – can create a new journal in minutes. Researchers can – and do – respond to this availability by slicing up their work (and their data) into minimally publishable units (sometimes called salami slicing), knowing that they have lower chances of contributing a single high-quality, holistic article to the scholarly record than they do contributing multiple smaller studies to a wider variety of field journals (a practice that can also fuel self-citation and self-plagiarism).

Who is testing the water?

Another argument is that the growth in the number of journals and the number of published research articles has been a response to scientific advances requiring increasingly specialized knowledge. The scientific publication model, and the incentives superstructure supporting researcher participation in it, is based upon the premise that science advances by taking baby steps – little discoveries, a single hypothesis falsified, a new data point, a novel method, another patent. And yet, there is very little evidence that this slow march of science correlates positively with the volume of published research. In fact, there is a countervailing theory, and some evidence to support it, that the overall quality of research is inversely proportional to the overall quantity of research – or perhaps just a random consequence of the pool of available sources to cite.

Major breakthroughs, theories, and discoveries in science are rare and, relative to the volume of research over the last half-century, appear to be getting rarer. If journals provide value inversely proportional to their acceptance rates (i.e., a component of their prestige by the publishers’ metrics), then one might assume that the quality of reviews and the rigor of the underlying research must also be significantly higher than at less prestigious journals, right? Well, if that’s the case, then must it also be true that a less prestigious journal has a poorer review system allowing lower-quality research to be published? If publishers were building bigger pipes solely in response to the increase in the volume of submissions flowing through the firehose – to altruistically advance science in proportion to its progress, and not to increase their revenues – they would have produced more journals with lower acceptance rates, or improved the fittings and bushings connecting their pipes to the firehose by innovating with alternative business models (a few ideas that some good-faith publishers have attempted, without much disruptive success, include compensating reviewers and charging non-refundable submission fees at the time of submission).

Two widespread and successful, if inequitable, innovations that have come to dominate the last 20 years of the digital transformation are article processing charges (APCs) and journal impact factors (JIFs[3]). APCs support the surplus revenues of the journals and not the quality of the underlying research that they publish. This latter point was highlighted in the 2022 Budapest Open Access Initiative 20th anniversary statement: “[APCs] don’t pay for improved quality but for the perception of improved quality…Career advancement can depend on that perception. But that is a problem to solve, not an immutable reality to accommodate.” The JIFs were also called out by the statement for mistaking impact for quality and conflating journal impact with article impact. Even if the JIFs and their individual impact factor counterparts were once an effective measure of quality, metrics that become targets rarely remain effective measures.

If peer reviewers have a role in the firehose analogy, it is water quality control. However, across the academic publishing landscape, quality control through the peer review system is mixed – and the mix may be increasingly unreliable. It might seem intuitive to assume that, given the greater resources and credibility of higher-prestige journals, high-quality rigorous reviews concentrate there while poor-quality reviews pool in less prestigious outlets, but the evidence appears to be to the contrary. Of course, the worst of the worst falls within the predatory space, where there is a complete absence of review or disingenuous promises of quality rapid review. Sometimes this takes the form of otherwise legitimate journals loosening their review processes to accelerate the publication of “special issues” (which sometimes come with the condition that authors pay APCs even when they were invited by the journal to contribute – a suspect practice altogether). Certainly, no peer review system or reviewer pool is perfect, and significant and often consequential lapses in scientific integrity or methodological concerns escape even the most rigorous peer review and end up in prestigious publication outlets.

The most egregious and sometimes hilarious lapses in the peer review system in high profile journals – some at the very apex of prestige – only demonstrate that there is a significant problem with peer review at a systemic level.

The conjecture advanced here, and by others, is that this problem is threefold:

  1. The demand for peer review is too high;

  2. The rewards for conducting peer review are too low;

  3. The resources required for rigorous peer review are insufficient for most peer review editorial systems.

The initial conditions of the first aspect of the peer review problem have already been implied here: the likelihood that there are too many papers and too many journals for the system to support. If demand for peer review is too high, it’s likely because of a deficit in the supply of qualified peers available to conduct review.

There are good data from the US National Science Foundation to support that claim. According to the latest report on the results of the Survey of Earned Doctorates from the NSF’s National Center for Science and Engineering Statistics, the number of earned doctorates in science and engineering fields has risen on average over time. The trend peaked in 2019 at 42,898 doctorates awarded before declining slightly during the pandemic years.

However, even as universities award a greater number of doctorates over time, fewer and fewer recipients are entering academic jobs at universities, where the vast majority of journals draw their editorial review boards. At the 2024 State of the Science Address, NASEM President Marcia McNutt called attention to the deficit in academic replacement. Just 36% of doctorate recipients not immediately going into a postdoctoral fellowship in 2021 reported that their first job out of graduate school would be in academia – a significant decline from 48% in 2001. Now, it’s true that a greater number of doctorate recipients are entering postdoctoral fellowships, but the rate hasn’t kept up with the decline in academe overall – postdoctoral fellowships are not offsetting the decline in career positions filled in academia. Almost everyone getting a doctorate goes into a non-university position after graduation, and for good reason: the academy cannot compete with the salary, benefits, and job security and stability that other sectors provide to newly minted PhDs. And once in those jobs, very few PhDs continue to publish in or review for scholarly outlets at high rates – some journals have strict review policies that do not allow non-academics, or individuals who have not published recently, to participate in peer review, which compounds the problem.

For those who remain in the academy, participation in peer review – ostensibly lauded as a critical component of the scientific process – is not given adequate recognition and reward by academic institutions. Peer review is typically treated by tenure and promotion committees as service, the least valued of scholars’ expected contributions. Recognition mechanisms outside of the academy, and the metadata infrastructure to support them, are increasingly becoming available[4], and yet the culture of the academy has been slow to adopt these innovations.

Based on these trends, it’s very likely that the proliferation in the number of journals and articles rose concurrently with a decline in the availability of qualified reviewers to conduct robust, high-quality peer review. So not only is there a demand to publish low-quality research; there is also a deficit in the quality of the peer review system overall. Here too, artificial intelligence – even with its potential benefits to the peer review system (such as augmented literature search, detection of falsified data and images, and translation services) – is likely contributing to a downturn in thoughtful and rigorous review, as generative AI can do both the reading and the writing on behalf of a human reviewer. Some publishers require reviewers to abstain from using those tools, even when they could be helpful. Given the unrewarded high demands on qualified human reviewers, we can be confident that some are indeed employing this tactic without disclosure.

All of this points to a significant crisis unfolding within the scholarly peer review system[5]. Legitimate journals are finding it increasingly difficult to solicit and retain high-quality and timely responses to invitations for peer review. The pressures on potential reviewers are perverse. The motivations for contributing to peer review are almost entirely altruistic, save for the ability to stay abreast of potentially competing research – or, dreadfully, to quell competing research from behind the veil of anonymity. There’s very little incentive provided by publishers, academic institutions, and scholarly societies for potential reviewers to contribute their valuable time in an increasingly productivity-constrained environment. For many journals, the time between the initial invitation and the final decision is substantially delayed, and editorial boards have to approach an increasing number of potential reviewers before finding someone available and willing. Worse, editorial boards and journals do very little to assess and report on the quality of reviews or to incentivize reviewers to maintain the scientific integrity of the reviews they provide. Authors are required to disclose conflicts of interest; reviewers, however, are rarely if ever expected to do the same.

Still, peer review remains an incredibly valuable asset to the research enterprise. When done effectively and transparently, peer review provides significant benefits to science, including: improving the quality and rigor of works, correcting grammar and typesetting, identifying theoretical gaps and insights, fostering collaborations, conferring public trust in the oversight of science, and more. But this crisis in peer review threatens all of those benefits. Without intervention and significant reform in the culture of scholarly publishing, it will be peer review – and not unsustainable publication business models – that undermines science the most. The firehose will continue to flow even as the water quality tests fail to pass potability standards.

Sip from the spring

Silen, and many others after him, pointed out the unintended consequences that the publish or perish culture in academia has created for the research community, consequences revisited here: a proliferation of low-quality publications, slow review times – or false promises of high-quality fast review – a proliferation of predatory journals and paper mills, and more. With the recent emergence and widespread availability of generative artificial intelligence, all of these problems are accelerating in magnitude and velocity and will continue to put significant strain on the fittings and controls in this system – possibly until the hose bursts. As we enter a second digital transformation in scholarly writing, one characterized by demands for data and code sharing in publicly accessible repositories and by the challenges and promises of AI[6], reform in infrastructure alone will be insufficient to support the forthcoming deluge – we need significant changes to the incentive structure for academic scholarship to lower the pressure, and increase the quality, of the water coming out of the firehose.

Researchers hold an incredible amount of market power in scholarly publishing – they drive both the supply of and demand for manuscripts. Researchers can, and should, leverage that power to challenge the status quo and resolve the firehose problem that they themselves decry. They are the source of the Pierian spring, and everything downstream depends on its flow. One way to temper the firehose would be to sip directly from the spring rather than from the nozzle.

Scholarly writing is much richer than just publications. For example, researchers produce grant proposals, editorials, policy briefs, blog posts, teaching curricula and lectures, software code and documentation, dataset curation, and lab notes and codebooks. Some of these scholarly outputs may end up being published – some may even end up changing how science is communicated and conducted. But, realistically, most of these “non-traditional” outputs will not obtain the recognition that their authors and contributions deserve. Many of these outputs hold incredible value for the scientific community and the public. A singular focus on writing manuscripts to submit for publication lowers the likelihood that the value of these other materials can be realized.

By writing more and publishing less, researchers can lower the pressure of the firehose while continuing to make valuable contributions to the world. When academics write policy briefs that inform legislation, create a blog that enhances dialogue in their field, produce open data that are broadly reused, or write open software that makes a gold-standard method widely available, they should be rewarded on par with any particular peer-reviewed publication in a journal. All of these materials fill the spring even if only published manuscripts filter through to the firehose.

The superstructure of incentives – predominantly those that provide credit for the purposes of tenure, promotion, and other career advances – should treat some combination of these outputs with parity to publications. Increasingly, funder policies require preprints, data and code sharing, and other research outputs from the work they support with their grant money. Some funders incentivize these requirements by rewarding compliance with parity to publications for the purposes of future grant review. The same should be true of the home institutions that receive those funds: researchers should demand that the full cornucopia of work supported by their grants is rewarded equitably for the purposes of performance, tenure, and promotion review. That is not to say that major breakthroughs and discoveries should not receive special attention – rather, it’s a proposal to amplify the entire portfolio of work that led to those discoveries.

Sharing ideas earlier with the community can greatly improve the quality of scholarship and broaden the impact and reach of those ideas. One way to accomplish this is by contributing to preprints and preprint reviews. Preprints, of course, can be added to a repository without ever undergoing review – that’s both an advantage and a disadvantage. On the one hand, mass adoption of preprints could shift the firehose problem to a poisoned-spring problem, with a large volume of unreviewed manuscripts filling the pool; on the other, preprints can be checked by many more potential reviewers and provide an avenue for sharing important results that may not otherwise get published in a journal (such as null results). To help with the balance, preprint review provides an additional filter from the spring into the firehose.

Preprint review is increasingly important as a mechanism to reform the peer review system – reform that seeks to shift peer review from a monoculture maintained solely by publishers into an ecosystem largely maintained by researchers, their institutions, and their funders. With preprint review, authors participate in a system that views peer review not as a gatekeeping hurdle to overcome on the way to publication but as a participatory exercise to improve scholarship.

Preprint review can also reveal potential errors and issues of scientific integrity earlier in the development of a manuscript, so that authors can make more informed decisions about the state of their research and what should be done to address those issues in future revisions. Also, because preprint review is done out in the open, reviewers have an opportunity to interact with one another, to expose disagreements, highlight consistencies, and respond to ideas – ultimately allowing authors to receive a more holistic review. It’s also a way to ensure that authors retain full control over their intellectual property and its derivatives by asserting licenses that fit their personal needs and values (e.g., a CC-BY-NC license if one does not want publishers to sell the content of a manuscript for use in AI).

The macro-effect of reforming peer review to include widespread use of preprint review would align nicely with a widely held philosophy of science that treats science communication as a conversation rather than a broadcast. Models of preprint review are many, including relatively novel approaches such as PREreview’s live review, where multiple reviewers collaborate to review a preprint in real time online, or ASAPbio’s crowd preprint review, done collaboratively but asynchronously. Some journals, like eLife with its new model, already consider preprint review in their editorial pipelines – bypassing the need to solicit additional feedback and accelerating editorial decisions. Recently, the Gates Foundation refreshed its open access policy to require deposit of preprints by its grantees. Certainly, funders like Gates continue to value peer review, and greater adoption of preprint review can shift the inaccurate belief that all preprints lack review (a belief codified in Gates’ required disclosure for researchers posting preprints).

The firehose problem in academic publishing is unlikely to be resolved by changing the pipe fittings alone. Of course, this is the rational option that publishers choose in response to the apparent demand from researchers – as Ciavarella rightly pointed out. The underlying demand, however, is fueled by a complex of misaligned and perverse incentives to publish or perish in academia. Reform at the fittings has only compounded the problem – adding demand to an already stressed peer-review system and favoring quantity over quality of published manuscripts. A more holistic approach should focus repairs upstream, away from the nozzle, at the source of the knowledge that flows through the firehose itself – a spring filled with a variety of scholarly outputs beyond just manuscripts. Incentives for filling that spring should mesh with the rewards for publishing offered by funders, academic institutions, policymakers, publishers, and researchers themselves. This requires all of those stakeholders to work together collegially, without polarization. In sum, alleviating the pressure coming out of the firehose is straightforward when the incentives are appropriately and collaboratively aligned: write more and publish less.

Acknowledgments

Special thanks are owed to Stuart Buck, Tom Ciavarella, Erin McKiernan, Peter Suber, and Crystal Tristch for their efforts in improving this work.

Disclosure

The opinions expressed here are my own and may not represent those of my employer, my position, or the reviewers. For full transparency: I am a member of the scientific advisory board of PREreview, which I cited here. I contributed – either as author or reviewer – to a few of the papers incorporated by reference above. I have been guest editor or associate editor on a number of special issues during my academic career though I have never participated in soliciting direct contributions to those issues. I have made every attempt at citing works that are publicly accessible – a few works may not be freely available to all readers.

Notes

  1. (4M publications / 0.32 acceptance rate) = 12.5M submissions; (12.5M submissions / 525,600 minutes in a year) ≈ 24 submissions per minute. This should conservatively underestimate the true rate.

  2. The same argument about the balance of quantity over quality has been made about books too.

  3. Pronounced like, but not to be confused with, the famous brand of peanut butter.

  4. A trusted colleague once warned: “be careful what you eat in the scholarly kitchen.” Thankfully, this article is good soup.

  5. This is the most comprehensive and up-to-date review of peer review currently available. The scope of the article demonstrates the value of peer review, its novelty as a 20th Century practice, and the challenges that jeopardize its contemporary legitimacy in the 21st Century. It’s well worth a read and is freely available open access. There is another recent, excellent article about the crisis in peer review by Colleen Flaherty, behind a paywall here.

  6. There are new tools available to researchers for writing and conducting peer review, including emerging artificial intelligence tools.

References

  1. Patil, C., & Siegel, V. (2009). Drinking from the firehose of scientific publishing. Disease Models & Mechanisms, 2(3–4), 100–102. https://doi.org/10.1242/dmm.002758

  2. Hanson, M. A., Barreiro, P. G., Crosetto, P., & Brockington, D. (2023). The strain on scientific publishing (Version 2). arXiv. https://doi.org/10.48550/arxiv.2309.15884

  3. Jin, S. (2024). Should We Publish Fewer Papers? ACS Energy Letters, 9(8), 4196–4198. https://doi.org/10.1021/acsenergylett.4c01991

  4. Publish or perish: Origin and perceived benefits. (2018). In I. Moosa, Publish or Perish (pp. 1–17). Edward Elgar Publishing. https://doi.org/10.4337/9781786434937.00007

  5. Silen, W. (1971). Publish or Perish. Archives of Surgery, 103(1), 1. https://doi.org/10.1001/archsurg.1971.01350070027002

  6. Herbert, R. (2020). Accept Me, Accept Me Not: What Do Journal Acceptance Rates Really Mean? [ICSR Perspectives]. https://doi.org/10.2139/ssrn.3526365

  7. Björk, B.-C. (2019). Acceptance rates of scholarly peer-reviewed journals: A literature survey. El Profesional de La Información, 28(4). https://doi.org/10.3145/epi.2019.jul.07

  8. Easton, C. (1997). Too many journals, in too many forms? Serials Review, 23(3), 64–68. https://doi.org/10.1080/00987913.1997.10764393

  9. Harvey, L. A. (2020). We need to value research quality more than quantity. Spinal Cord, 58(10), 1047. https://doi.org/10.1038/s41393-020-00543-y

  10. Ioannidis, J. P. A. (2015). A generalized view of self-citation: Direct, co-author, collaborative, and coercive induced self-citation. Journal of Psychosomatic Research, 78(1), 7–11. https://doi.org/10.1016/j.jpsychores.2014.11.008

  11. Casadevall, A., & Fang, F. C. (2014). Specialized Science. Infection and Immunity, 82(4), 1355–1360. https://doi.org/10.1128/iai.01530-13

  12. Tumin, D., & Tobias, J. (2019). The peer review process. Saudi Journal of Anaesthesia, 13(5), 52. https://doi.org/10.4103/sja.SJA_544_18

  13. Michalska-Smith, M. J., & Allesina, S. (2017). And, not or: Quality, quantity in scientific publishing. PLOS ONE, 12(6), e0178074. https://doi.org/10.1371/journal.pone.0178074

  14. Park, M., Leahey, E., & Funk, R. J. (2023). Papers and patents are becoming less disruptive over time. Nature, 613(7942), 138–144. https://doi.org/10.1038/s41586-022-05543-x

  15. Avital, M. (2024). Digital Transformation of Academic Publishing: A Call for the Decentralization and Democratization of Academic Journals. Journal of the Association for Information Systems, 25(1), 172–181. https://doi.org/10.17705/1jais.00873

  16. McKiernan, E. C., Schimanski, L. A., Muñoz Nieves, C., Matthias, L., Niles, M. T., & Alperin, J. P. (2019). Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations. eLife, 8, e47338. https://doi.org/10.7554/eLife.47338

  17. Fire, M., & Guestrin, C. (2019). Over-optimization of academic publishing metrics: Observing Goodhart’s Law in action. GigaScience, 8(6), giz053. https://doi.org/10.1093/gigascience/giz053

  18. Drozdz, J. A., & Ladomery, M. R. (2024). The Peer Review Process: Past, Present, and Future. British Journal of Biomedical Science, 81, 12054. https://doi.org/10.3389/bjbs.2024.12054

  19. Brembs, B. (2018). Prestigious Science Journals Struggle to Reach Even Average Reliability. Frontiers in Human Neuroscience, 12, 37. https://doi.org/10.3389/fnhum.2018.00037

  20. Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience, 7. https://doi.org/10.3389/fnhum.2013.00291

  21. Elmore, S. A., & Weston, E. H. (2020). Predatory Journals: What They Are and How to Avoid Them. Toxicologic Pathology, 48(4), 607–610. https://doi.org/10.1177/0192623320920209

  22. Repiso, R., Segarra-Saavedra, J., Hidalgo-Marí, T., & Tur-Viñes, V. (2021). The prevalence and impact of special issues in communications journals 2015–2019. Learned Publishing, 34(4), 593–601. https://doi.org/10.1002/leap.1406

  23. Schimanski, L. A., & Alperin, J. P. (2018). The evaluation of scholarship in academic promotion and tenure processes: Past, present, and future. F1000Research, 7, 1605. https://doi.org/10.12688/f1000research.16493.1

  24. Flanagin, A., Kendall-Taylor, J., & Bibbins-Domingo, K. (2023). Guidance for Authors, Peer Reviewers, and Editors on Use of AI, Language Models, and Chatbots. JAMA, 330(8), 702. https://doi.org/10.1001/jama.2023.12500

  25. Horta, H., & Jung, J. (2024). The crisis of peer review: Part of the evolution of science. Higher Education Quarterly, e12511. https://doi.org/10.1111/hequ.12511

  26. Superchi, C., González, J. A., Solà, I., Cobo, E., Hren, D., & Boutron, I. (2019). Tools used to assess the quality of peer review reports: A methodological systematic review. BMC Medical Research Methodology, 19(1), 48. https://doi.org/10.1186/s12874-019-0688-x

  27. Bergstrom, T., Rieger, O. Y., & Schonfeld, R. C. (2024). The Second Digital Transformation of Scholarly Publishing: Strategic Context and Shared Infrastructure. Ithaka S+R. https://doi.org/10.18665/sr.320210

  28. Alperin, J. P., Schimanski, L. A., La, M., Niles, M. T., & McKiernan, E. C. (2022). The Value of Data and Other Non-traditional Scholarly Outputs in Academic Review, Promotion, and Tenure in Canada and the United States. In A. L. Berez-Kroeker, B. McDonnell, E. Koller, & L. B. Collister (Eds.), The Open Handbook of Linguistic Data Management (pp. 171–182). The MIT Press. https://doi.org/10.7551/mitpress/12200.003.0017

  29. Avissar-Whiting, M., Belliard, F., Bertozzi, S. M., Brand, A., Brown, K., Clément-Stoneham, G., Dawson, S., Dey, G., Ecer, D., Edmunds, S. C., Farley, A., Fischer, T. D., Franko, M., Fraser, J. S., Funk, K., Ganier, C., Harrison, M., Hatch, A., Hazlett, H., … Williams, M. (2024). Recommendations for accelerating open preprint peer review to improve the culture of science. PLOS Biology, 22(2), e3002502. https://doi.org/10.1371/journal.pbio.3002502

  30. Bucchi, M., & Trench, B. (2021). Rethinking science communication as the social conversation around science. Journal of Science Communication, 20(03), Y01. https://doi.org/10.22323/2.20030401

  31. Dawson, D. (DeDe), Morales, E., McKiernan, E. C., Schimanski, L. A., Niles, M. T., & Alperin, J. P. (2022). The role of collegiality in academic review, promotion, and tenure. PLOS ONE, 17(4), e0265506. https://doi.org/10.1371/journal.pone.0265506

Editors

Ludo Waltman
Editor-in-Chief

Ludo Waltman
Handling Editor

Editorial assessment

by Ludo Waltman

DOI: 10.70744/MetaROR.40.1.ea

In this blog post the author argues that problematic incentive structures have led to a rapid increase in the publication of low-quality research articles and that stakeholders need to work together to reform incentive structures. The blog post has been reviewed by three reviewers. Reviewer 3 considers the blog post to be a ‘great piece’, and Reviewer 1 finds it ‘compellingly written and thought provoking’. According to Reviewer 2, the blog post does not offer significant new insights for readers already familiar with the topic. All three reviewers provide recommendations for clarifications. Reviewers 2 and 3 also suggest the blog post could be more critical toward publishers. Reviewers 1 and 3 suggest taking a broader perspective on incentives, for instance by also considering incentives related to teaching and admin or incentives for funders, libraries, and other organizations.

Competing interests: None.

Peer review 1

CJ MacCallum

DOI: 10.70744/MetaROR.40.1.rv1

Summary of the essay

In this essay, the author seeks to explain the ‘firehose’ problem in academic research, namely the rapid growth in the number of articles but also the seemingly concurrent decline in quality. The explanation, he concludes, lies in the ‘superstructure’ of misaligned incentives and feedback loops that primarily drive publisher and researcher behaviour, with the current publish or perish evaluation system at the core. On the publisher side, these include commercial incentives driving both higher acceptance rates in existing journals and the launch of new journals with higher acceptance rates. At the same time, publishers seek to retain reputational currency by maintaining the consistency, and therefore brand power, of scarcer, legacy-prestige journals. The emergence of journal cascades (automatic referrals from one journal to another journal within the same publisher) and the introduction of APCs (especially for special issues) also contribute to commercial incentives driving article growth. On the researcher side, he argues that there is an apparent demand for more publishing outlets, and simultaneous salami slicing, because authors feel they have to distribute relatively more publications among journals that are perceived to be of lower quality (higher acceptance rates) in order to gain prestige equivalent to that of a higher-impact paper. The state of peer review also impacts the firehose. The drain of PhD-qualified scientists out of academia, compounded by a lack of recognition for peer review, further contributes to the firehose problem because there are insufficient reviewers in the system, especially for legitimate journals. Moreover, such peer review as is done is no guarantee of quality (in highly selective journals as well as ‘predatory’ ones). One of his conclusions is that there is a crisis not just in scholarly publishing but in peer review specifically, and it is this crisis that will undermine science the most. Add AI into the mix of this publish or perish culture, and he predicts the firehose will burst.

He suggests that the solution lies in researchers taking back power themselves by writing more but ‘publishing’ less. By writing more he means outputs beyond traditional journal publications, such as policy briefs, blogs, preprints, data, code and so on, and that these should count as much as peer-reviewed publications. He places special emphasis on the potential role of preprints and on open and more collegiate preprint review acting as a filter upstream of the publishing firehose. He ends with a call for more collegiality across all stakeholders to align the incentives and thus alleviate the pressure causing the firehose in the first place.

General Comment

I enjoyed reading the essay and think the author does a good job of exposing multiple incentives and competing interests in the system. Although discussion of perverse incentives has been raised in many articles and blog posts, the author specifically focuses on some of the key commercial drivers impacting publishing and the responses of researchers to those drivers. I found the essay compellingly written and thought-provoking, although it took me a while to work through the various layers of incentives. In general, I agree with the incentives and drivers he has identified, and especially his call for stakeholders to avoid polarization and work together to repair the system. Although I appreciate the need for a focused argument, I did miss a more in-depth discussion of the equally complex layers of incentives for institutions, funders and other organisations (such as Clarivate) that also feed the firehose.

I note that my perspective comes from a position of being deeply embedded in publishing for most of my career. This will have also impacted what I took away from the essay and the focus of my comments below.

Main comments

  1. I especially liked the idea of a ‘superstructure’ of incentives as I think that gives a sense of the size and complexity of the problem. At the same time, by focusing on publisher incentives and researchers’ response to them he has missed out important parts of the superstructure contributing to the firehose, namely the role of institutions and funders in the system. Although this is implicit, I think it would have been worth noting more, in particular:

    • He mentions institutions and the role of tenure and promotion towards the end but not the extent of the immense and immobilizing power this wields across the system (despite initiatives such as DORA and CoARA).

    • Most review panels (researchers) assessing grants for funders are also still using journal publications as a proxy for quality, even if the funder policy states that journal name and rank should not be used.

    • Many institutions/universities still rely on the number and venue of publications. Although some notable institutions are moving away from this, the impact factor/journal rank is still largely relied on. This seems especially the case in China and India, for example, which have shown huge growth in research output. Although the author discusses the firehose, it would have been interesting to see a regional breakdown of this.

    • Libraries also often negotiate with publishers based on volume of articles – i.e. they want evidence that they are getting more articles as they renegotiate a specific contract (e.g. transformative agreements), rather than also considering, for example, the quality of service.

    • Institutions are also driven by rankings, in a parallel way to researchers being assessed based on journal rank (or impact factor). How university rankings are calculated is also often opaque (apart from the Leiden rankings), but publications form a core part. This further incentivises institutions to select researchers/faculty based on the number and venue of their publications in order to promote their own position in the rankings (and obtain funding).

  2. The essay is also about power dynamics and where power in the system lies. The implication in the essay is that power lies with the publishers and can be taken back by researchers. Publishers do have power, especially those in possession of high-prestige journals, and yet publishers are also subject to the power of other parts of the system, such as funder and institutional evaluation policies. Crucially, other infrastructure organisations, such as Clarivate, that provide indexing services and citation metrics also exert a strong controlling force on the system, for example:

    • Only a subset of journals are ever indexed by Clarivate, and funders and institutions also use the indexing status of a journal as a proxy for quality. A huge number of journals are thus excluded from the evaluation system (primarily in the arts and humanities, but also many scholar-led journals from low- and middle-income countries, and also new journals). This further exacerbates the firehose problem because researchers often target only indexed journals. I’d be interested to see if the firehose problem also exists in journals that are not traditionally indexed (although I appreciate this is also likely to be skewed by discipline).

    • Indexers also take on the role of arbiters of journal quality and can choose to delist or list journals accordingly. Listing or delisting has a huge impact on submission rates to journals, which can be worth millions of dollars to a publisher, but it is often unclear how quality is assessed and there seems to be a large variance in who gets listed or not.

    • Clarivate are also paid large fees by publishers to use their products, which creates a potential conflict of interest for the indexer: delisting journals from major publishers could cause a substantial loss of revenue if those publishers withdraw their fees. Clarivate also relies on publishers to create the journals on which its products are based, which may create a further conflict if Clarivate wishes to retain the in-principle support of those publishers.

    • The recent delisting of eLife, even though it is an innovator and of established quality, shows the precariousness of journal indexing.

  3. All the stakeholders in the system seem to be essentially ‘following the money’ in one way or another – it’s just that the currency for researchers, institutions, publishers and others varies. Publishers – both commercial and indeed most not-for-profit – follow the requirements of the majority of their ‘customers’ (and that’s what authors, institutions, subscribers etc. are in this system) in order to ensure both sustainability and revenue growth. This may be a legacy of the commercialisation of research in the 20th Century, but we should not be surprised that growth is a key objective for any company. It is likely that commercial players will continue to play an important role in science and science communication; what needs to change are the requirements of the customers.

  4. The root of the problem, as the author notes, is what is valued in the system, which is still largely journal publications. The author’s solution is for researchers to write more – and for value to be placed on this greater range of outputs by all stakeholders. I agree with this sentiment – I am an ardent advocate for Open Science. And yet, I also think that a focus on outputs per se, and not on practice or services, is always going to lead to the system being gamed in some way to increase the net worth of a specific actor in the system. Preprints and preprint review could themselves be subject to such gaming if value is placed on, e.g., the preprint server or the preprint-review platform as a proxy for preprint and, in turn, researcher quality.

  5. I think the only way to start to change the system is to start placing much more value on both the practices of researchers (as well as outputs) and on the services provided by publishers. Of course saying this is much easier than implementing it.

Other comments

  1. A key argument is that higher acceptance rates actually create a perverse incentive for researchers to submit as many manuscripts as possible because they are more likely to get accepted in journals with higher acceptance rates. I disagree that higher acceptance rates per se are the main incentive for researchers to publish more. More powerful is the fact that those responsible for grants and promotion continue to use quantity of journal articles as a proxy for research quality.

  2. Higher acceptance rates are not necessarily an indicator of low quality, or a bad thing, if they mean that null, negative and inconclusive results are also published.

  3. The author states that Journal Impact Factors might have been an effective measure of quality in the past. I take issue with this because the JIF has, as far as I know, always been driven by relatively few outliers (papers with very high citations), and I don’t know of evidence to show that this wasn’t also true in the past. It also makes the assumption that citations = quality.

  4. The author asks at one point, “Why would field specialization need a lower threshold for publication if the merits of peer review are constant?” I can see a case for lower thresholds, however, when the purpose of peer review is primarily to select for high impact, rather than the rigour, of the science conducted. A similar case might be made for multidisciplinary research, where peer reviewers tend to assess an article from their discipline’s perspective and reject it because the part that is relevant to them is not interesting enough… Of course, this all points to the inherent problems with peer review (on which I agree with the author).

  5. The author puts his essay in appropriate context, drawing on a range of sources to support his argument. I particularly like that he tried to find source material that was openly available.

  6. He cites 2 papers by Bjoern Brembs to substantiate the claim that there is potentially poorer review in higher-prestige journals than in lower-ranked journals. These papers were published in 2013 and 2018 and their conclusions relied, in part, on the fact that higher-ranked journals had more retractions. Apart from a potential reporting bias, given the flood of retractions across multiple journals in more recent years, I doubt this correlation still exists.

  7. The author works out submission rates from the published acceptance rates of journals. The author acknowledges this is only approximate and discusses several factors that could inflate or deflate it. I can add a few more variables that could impact the estimate, including: 1) the number of articles a publisher/journal rejects before articles are assigned to any editor (e.g. because of plagiarism, reporting issues or other research integrity issues); 2) the extent to which articles are triaged and rejected by editors before peer review (e.g. because they are out of scope or not sufficiently interesting to peer review); 3) the number of articles rejected after peer review; and 4) the extent to which authors independently withdraw an article at any stage of the process. When publishers publish acceptance rates, they don’t make it clear what goes into the numerator or the denominator, and there are no community standards around this. The author rightly notes this process is too opaque.

Catriona J. MacCallum

As is my practice, I do not wish to remain anonymous. Please also note that I work for a large commercial publisher and am writing this review in an independent capacity, such that this review reflects my own opinions, which are not necessarily those of my employer.

Competing interests: None.

Peer review 2

Dan Brockington

DOI: 10.70744/MetaROR.40.1.rv2

This is a well-written and clear enough piece that may be helpful for a reader new to the topic. To people familiar with the field there is not much that is new here. The final recommendation is not well expressed. As currently put it is, I think, wrong. But it is a provocative idea. I comment section by section below.

The first paragraphs repeat well-established facts that there are too many papers. Seppelt et al’s contribution is missing here. It also reproduces the disingenuous claim, by a publisher’s employee, that publishers ‘only’ respond to demand. I do not think that is true. They create demand. They encourage authors to write and submit papers, as anyone who has been emailed by MDPI recently can testify. Why repeat something which is so inaccurate?

The section on ‘upstream of the nozzle’ is rather confusing. I think the author is trying to establish if more work is being submitted. But this cannot be deduced from the data presented. No trends are given. Rejection rates will be a poor guide if the same paper is being rejected by several journals. I was also confused by the sources used to track growth in papers – why not just use Dimensions data? The final paragraph again repeats well known facts about the proliferation of outlets and salami slicing. Thus far the article has not introduced new arguments.

Minor points in this section:

  • There are some unsupported claims, e.g. ‘This is a practice that is often couched within the seemingly innocuous guise of field specialty journals.’

  • I also do not understand the logic of this rather long sentence: ‘The expansion of journals with higher acceptance rates alters the rational calculus for researchers – all things being equal higher acceptance rates create a perverse incentive to submit as many manuscripts as possible since the underlying probability of acceptance is simply higher than if those same publications were submitted to a journal with a lower acceptance rate, and hence higher prestige.’ I suggest it be rephrased.

The section on peer review (Who’s testing the water) is mostly a useful review of the issues. But there are some problems which need addressing. Bizarrely, when discussing whether there are enough scientists, it fails to mention Hanson et al’s global study, despite linking to its preprint in the opening lines. Instead the author adopts a parochial North American approach and refers only to PhDs coming from the US. Trends in one country cannot explain an international publishing scene. These are not the ‘good data’ the author claims. Likewise, the value of data on doctorates not going on to a post-doc hinges on how many post-docs there are. That trend is not supplied. The statement ‘Almost everyone getting a doctorate goes into a non-university position after graduation’ may be true, but no supporting data are supplied to justify it. Nor do we know what country, or countries, the author is referring to.

The section ‘A Sip from the Spring’ makes the mistaken claim that researchers hold market power. This is not true. Researchers’ institutions, their libraries, and governments are the main source of publisher income. It is here that the key proposal for improvement is made: researchers can write more and publish less. But if the problem is that there is too much poorly reviewed literature, then this cannot be the solution. Removing all peer review would mean there is even more material to read whose appearance is not slowed by peer review at all. If peer review is becoming inadequate, evading it entirely is hardly a solution.

This does not mean we should not release pre-prints. The author is right to advocate for them, but mistaken to think that this will reduce publishing pressures. The clue is in the name ‘pre-print’: publication is intended.

Missing from the author’s argument is recognition of the important communities that researchers form, and the roles that journals play in providing venues for conversation, disagreement and discussion. They provide a filter. Yes, researchers produce other material than publications, as the author states: ‘grant proposals, editorials, policy briefs, blog posts, teaching curricula and lectures, software code and documentation, dataset curation, and labnotes and codebooks.’ I would add email and WhatsApp messages to that list. But adding all that to our reading lists will not reduce the volume of things to be read. It must increase it. And it would make it harder to marshal and search all those words.

But the idea is provocative nonetheless. Running through this paper, and occasionally made explicit, is the fact that publishers earn billions from their ‘service’ to academia. They have a strong commercial interest in our publishing more, and in competing with each other to capture a larger share of the market. If writing more, and publishing less, means we find ways of directing our thoughts so that they earn less money for publishers, then that could bring real change to the system.

A minor point: the fire hose analogy is fully exploited and rather laboured in this paper. But it is a North American term and image that does not travel so easily.

Competing interests: None.

Peer review 3

Jonny Coates

DOI: 10.70744/MetaROR.40.1.rv3

This op-ed addresses the exponential increase in publications and how it is leading to lower-quality peer review which, in turn, results in more bad science being published. It is a well-written article that tackles a seemingly eternal topic. This piece focusses more on the positives and potential actions, which is nice to see, as this is a topic that can become stuck in the problems. There are places throughout that would benefit from more clarity, and at times there appears to be a bias towards publishers, almost placing blame on researchers. Very simple word changes or headings could immediately resolve any doubt here, as I don’t believe this is the intention of the article at all.

Additionally, this article is very focussed on peer review (a positive), but I think it would benefit from small additions throughout that zoom out and place the discussion in the context of wider issues – for example, you cannot change peer-review incentives without changing the entire incentive structure around “service” activities, including teaching, admin, etc. This occurs to a degree with the discussion of other outputs, including preprints and data. Moreover, when discussing service-type activities, there is data revealing that certain demographics deliberately avoid this work. Adding this element to the article would provide a much stronger argument for change (and do some good in the current political climate).

Overall, I thought this was a great piece when it was first posted online and does exactly what a good op-ed should – provoke thought and discussion. Below are some specific comments, in reading order. I do not believe that there are any substantial or essential changes required, particularly given that this is an op-ed article.

—–

Quote: “Academia is undergoing a rapid transformation characterized by exponential growth of scholarly outputs.”

Comment: There’s an excellent paper providing evidence for this: https://direct.mit.edu/qss/article/5/4/823/124269/The-strain-on-scientific-publishing which would be a very positive addition.

Quote: “it’s challenging to keep up with the volume at which research publications are produced”

Comment: Might be nice to add that this complaint dates back almost to the beginning of sharing research via print media, just to reinforce that this is a very old point.

Quote: “submissions of poor-quality manuscripts”

Comment: The use of “poor quality” here is unnecessary. That a submission is not accepted is no reflection on its “quality”. As such, this seems to needlessly diminish work rejected by one journal.

Quote: “Maybe there are too many poor quality journals too – responding to an underlying demand to publish low quality papers.”

Comment: This misses the flip side – poor-quality journals encourage and actively drive low-quality and outright fraudulent submissions, owing to publisher dominance in the assessment of research and academics.

Quote: “even after accounting for quality,”

Comment: Quality is mentioned here but has yet to be clearly defined. What is “quality”? How many articles a journal publishes? The “prestige” of a journal? How many people cite the articles?

Quote: “Researchers can – and do – respond to the availability by slicing up their work (and their data) into minimally publishable units”

Comment: I fully agree that some researchers do exactly this. However, again, this seems to blame researchers for creating the firehose problem. I think this point could be reworded to place less blame, or be substantiated with evidence that this is a widespread practice. My own experience has been very mixed: I’ve worked for people who do this almost to the extreme (and have very high self-citation rates), and also for people who focus on the science and on making it as high quality and robust as possible. I agree that many respond to the explosion of journals and varied quality in a negative manner, but the journals, not researchers, are the drivers here.

Quote: “least important aspect of the expected contributions of scholars.”

Comment: I think it may be worth highlighting here that specific demographics (white males) sometimes actively avoid these kinds of service activities – there’s a good study providing data in support of this. It adds an extra dimension to the argument for appropriate incentives and the importance and challenges of addressing this.

Quote: “high quality peer review”

Comment: Just another comment on the use of “quality”. This is not defined, and I think when discussing these topics it is vital to be clear what one means by “high quality”. For example, high-quality peer review designed as quality control would detect gross defects and fraud, preventing such work from being published (peer review does not reliably achieve this). In contrast, high-quality peer review designed to help authors improve their work and avoid hyperbole would be very detailed and collegial, not requesting large numbers of additional experiments.

Quote: “conferring public trust in the oversight of science”

Comment: I’m not convinced of this. Conveying peer review as a stamp of approval or QC leads to reduced trust when examples of peer-review failure regularly emerge – just look at how peer review was used to justify hydroxychloroquine during COVID, or the MMR/autism issues that are still ongoing even after the work was retracted. I think this should be more carefully worded, removed, or expanded on to provide this perspective – this occurs slightly in the following sentence, but it is very important to be clear on this point.

Quote: “Researchers hold an incredible amount of market power in scholarly publishing”

Comment: I like the next few paragraphs but, again, this seems to blame researchers when they in fact hold little or no power. I agree that researchers *could* apply market pressure, but this is entirely unrealistic when their careers depend on publishing X papers in X journal – an argument as to why science feels increasingly non-collaborative, perhaps. Funders can effect immediate and significant changes. Institutions adopting reward structures for other activities, such as teaching, would have significant impacts on researcher behaviour. Researchers are adapting to the demands the publication system creates – more journals, greater quantity, and reduced quality – whilst publishers maintain control over assessment. eLife being removed from WoS/Scopus is a prime example of publishers (via their parent companies) preventing innovation or even rather basic improvements.

Quote: “With preprint review, authors participate in a system that views peer review not as a gatekeeping hurdle to overcome to reach publication but as a participatory exercise to improve scholarship.”

Comment: This is framing that I really like; improving scholarship, not quality control.

Quote: “buy”

Comment: typo

Quote: “adoption of preprint review can shift the inaccurate belief that all preprints lack review”

Comment: Is this the right direction for preprints, though? If we force all preprints to be reviewed and only value reviewed preprints, then we effectively dismantle the benefits of preprints and the potential we’ve been working so hard to build. A recent op-ed by Alice Fleerackers et al. provided an excellent argument to this effect. More a question than a suggestion for anything to change.

Quote: “between all of those stakeholders to work together without polarization”

Comment: I disagree here – publishers have repeatedly shown that their only real interest is money. Working with them risks undermining all of the effort (financial, careers, reputation, time) that advocates for change have put in. The OA movement should also highlight perfectly why this is such a bad route to go down (again). Publishers’ grip on preprint servers is a great example – those servers are hard to use as a reader, lack APIs and access to data, and are not innovative or interacting with independent services. The community should make the rules, and then publishers abide by and within them. Currently the publishers make all of the rules and dominate. Indeed, this is possibly the biggest omission from this article – the total dominance of publishers across the entire ecosystem. You can’t talk about change without highlighting that the publishers don’t just own journals but also the reference managers, the assessment systems, the databases, etc. I may be an outlier on this point, but for all of the people I interact with (often those at the bottom of the ladder) this is a strong feeling. Again, this is not a suggestion for anything to change – indeed, the point of an op-ed is to stimulate thought and discussion, so dissent is positive.

Note that these annotations were made in hypothes.is and are available here, linked in-text for ease – comments are duplicated in this review.

Competing interests: None.

Author response

DOI: 10.70744/MetaROR.40.1.ar

A few months back, Upstream editor Martin Fenner suggested that I submit my Upstream blog post titled Drinking from the Firehose? Write More and Publish Less for peer review, as a sort of experiment for Upstream through MetaROR. MetaROR, a relative newcomer to the scholarly communication community, provides the review and curate steps in the “publish-review-curate” model for meta-research.

While I do not consider myself a meta-researcher (a scholar who conducts research on research), many of my positions on science policy have implications for the field (especially those on transparency, openness, and reproducibility). I think the main call in my blog post for reform in scholarly communication – namely, to stop publishing in traditional journals as much and start rewarding a broader swath of scholarly activities like data sharing – is particularly appealing to meta-researchers who rely on non-publication outputs for their work. So, I submitted. The article was openly reviewed, and MetaROR provided an editorial assessment. Here, I reply to the reviewers and contribute to the curation of the original post.

The reviews are very high quality – in fact, they are some of the most well-reasoned reviews I’ve received in the 20 years I’ve been a scholar. If MetaROR represents the future of peer review through the publish-review-curate model, scholarly communication is about to get a whole lot better. You can read the open reviews of my blog post here. The revised version of the editorial is here.

As in traditional peer review, each reviewer provided feedback independently of the others, and the handling editor did not curate the reviews. I prefer when editors do such curation, since it helps to organize the response in a way that reduces redundancy. This is one of the main benefits of group-based peer review systems, such as PREreview’s Live Review. Also, there was no easy way (or at least not an obvious one) to export the reviews in plaintext from MetaROR so I could respond point-by-point in software of my choice. Below is an attempt to organize my response roughly around the major criticisms and suggestions in the reviews. Because this was an opinion piece and not research, I’m not going to respond to every point – though I would have accepted nearly all of them and revised accordingly had this been a research article.

Too Easy on the Publishers, Too Hard on Researchers

All three reviewers expressed some dismay over how light my criticism of the publishers was in my blog piece. I do not disagree. The reviewers rightfully point out that the publishers play an outsized role in the inequity created in the scholarly communication space. However, I am choosing not to revise much here, as the essay was already too long – it would have taken a tome to articulate my criticism of the publishers. That’s out of scope. However, I revised the first paragraph in the conclusion to state:

The publishers are incentivized to avoid any other form of reform – this is the rational option that publishers choose in response to the apparent demand from researchers – as Ciavarella rightly pointed out.

Two of the reviewers also thought I was too harsh on researchers. I don’t think that I was overly harsh. All three agree with me that researchers have some market role here but disagree on the extent to which they can exert influence. One reviewer claims researchers have no market power (with which I respectfully disagree). I’ve clarified in the paper that ‘the power any individual researcher has here is small. Collective action is needed.’ I reject the notion that researchers are blameless for the status quo – complacency empowers the publishers. Unfortunately, it’s also baked into the superstructure of the reward system that is perpetuated by publisher-controlled market forces. I also added the following sentiment along these lines when discussing the market power of researchers:

It’s free to share and read research without the need for costly, anticompetitive gatekeeping. Leveraging that freedom is an untapped source of market power.

Focus More on Institutions and Funders and Communities

Two of the three reviewers thought I needed to draw more attention to the roles, demands, and influence that academic institutions, publisher consortia, libraries, indexing services, scholarly societies, and grassroots research organizations have in this ecosystem. I agree with all these points – and had Clarivate’s irresponsible delisting of eLife from the Web of Science happened before I wrote the original piece, I would have highlighted that, as one reviewer suggested.

No New Arguments or Analysis

The reviewers felt that, while well-articulated, the arguments I was espousing are not novel. First, I think it is worthwhile to renew the idea that we should be more selective in choosing what to publish in journals. Focusing on quality over quantity and valuing activities beyond journal publications should be repeated often until it’s common practice.

One comment called for more data and analysis, and another wanted some additional research cited. I think those are great ideas, and I hope the reviewers can do that work – or perhaps the open review will inspire others to do so.

In response to the criticisms that preprints presuppose an eventual traditional publication and that they could be gamed, I revised that section accordingly:

There is risk of gaming preprints and preprint review just as there is in traditional publishing, such as by placing value on a paper for where it appears or how it was reviewed without considering its quality or contribution to science.

One reviewer misunderstood my point about preprints altogether:

Removing all peer review would mean there is even more material to read, whose appearance is not slowed by peer review at all. If peer review is becoming inadequate, evading it entirely is hardly a solution. This does not mean we should not release pre-prints. The author is right to advocate for them, but mistaken to think that this will reduce publishing pressures. The clue is in their name ‘pre-print’. Publication is intended.

I am absolutely not arguing for tossing out peer review. I strongly believe peer review is valuable but currently broken. Moreover, I reject the idea that peer review needs to happen behind the gatekeeping of publishers. I revised to clarify this and added a footnote based on the reviewer’s latter observation.

Peer review remains a critical check for pollutants in the waters – but the prevailing model needs significant reform. The traditional opaque, uncompensated system has eroded the quality, transparency, timeliness, and appropriateness of peer review due to competing priorities and a lack of appropriately aligned incentive structures. Novel models of peer review are needed – including publish-review-curate, preprint review, and compensated review – ideally all done transparently and with conflicts of interest declared in the open. At the same time, not all manuscripts need review to have value, and most preprints with value (even those with reviews) should not be published in journals.

New footnote: The term ‘preprint’ is evolving – what was once a moniker for a non-peer-reviewed manuscript intended eventually to be reviewed and published (or, more likely, rejected) now scopes in other forms, including publish-review-curate and manuscripts with preprint reviews. A new labeling and metadata system is desperately needed to highlight the state of review of a particular manuscript in a record of versions. Version control systems and badging are ubiquitous in the open-source software community and could be easily adopted here.
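Purely as an illustration of what the footnote imagines – a sketch of my own, not an existing standard – version-level review metadata with badge-able review states might look something like the following (all field names, the ReviewState vocabulary, and the example DOIs are hypothetical):

```typescript
// A minimal sketch of hypothetical version-level review metadata for a
// manuscript record. Field names are illustrative, not an existing standard.

type ReviewState = "unreviewed" | "preprint-reviewed" | "journal-reviewed";

interface ReviewEvent {
  reviewer: string;   // name, ORCID, or "anonymous"
  platform: string;   // e.g., a preprint-review service
  date: string;       // ISO 8601 date
  url?: string;       // link to the open review, if public
}

interface ManuscriptVersion {
  version: string;        // e.g., "v2", mirroring software-style version tags
  doi: string;            // DOI for this specific version
  state: ReviewState;     // badge-able review status for this version
  reviews: ReviewEvent[]; // zero or more attached reviews
}

// A record of versions, oldest first, as a version-control-style history.
const record: ManuscriptVersion[] = [
  { version: "v1", doi: "10.0000/example.v1", state: "unreviewed", reviews: [] },
  {
    version: "v2",
    doi: "10.0000/example.v2",
    state: "preprint-reviewed",
    reviews: [{ reviewer: "anonymous", platform: "MetaROR", date: "2024-11-01" }],
  },
];
```

The point of the sketch is simply that review state becomes explicit, versioned metadata that indexers and badges can read, rather than an assumption baked into the word ‘preprint’.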

Volume is Volume is Volume

Probably the most important critique among the set of reviews points out an apparent recursion in the logic of the thesis that I need to clarify: you can’t solve the firehose problem by writing more, as that just adds more volume to the flow. My revision to the conclusion clarifies my intent: what I’m proposing is to stop sending so many papers to journals for publication and to choose preprints more often for reading, reviewing, and writing. At the same time, the system should maintain or increase non-publication scholarly outputs and reward those too.

“Write-More” here is a placeholder for all the non-publication writing scholars do and should get credit for from their institutions and fields. Again, I happen to focus on writing because that’s what I care about in this editorial, and it would take volumes to pontificate on all the other services and activities that happen within the academy that are not properly rewarded.

Summary

Having my blog post peer-reviewed through MetaROR was a positive experience, and I recommend the service. However, my post was still just an editorial – my opinions and thoughts – not research. Had this been a research article, the reviews as presented would have been a very good roadmap for improving the paper. For MetaROR, I have two suggestions: 1) the editorial assessment could be improved by organizing the key points, and 2) all reviews could be made downloadable in plaintext for ease of importing into an editor.

Acknowledgments

Special thanks are owed to the reviewers, Catriona MacCallum, Dan Brockington, and Jonny Coates, the MetaROR handling editor Ludo Waltman, and to Upstream Editor and Front Matter founder Martin Fenner for the crazy idea to peer-review a blog post.

Disclosure

The opinions expressed here are my own and may not represent those of my employer, my associates, or the reviewers. I have no conflicts of interest to disclose.

This author response was previously published on Upstream.
