Published at MetaROR

April 23, 2026

Cite this article as:

Dianovics, D., Varga, M. A., Bognar, M., & Aczel, B. (2025). Acceptance and publication lags in scholarly journals between 2016 and 2025. Lifecycle Journal, 1. https://doi.org/10.71240/lcyc.084124

Acceptance and publication lags in scholarly journals between 2016 and 2025

Dominik Dianovics1,2, Marton A Varga2,1, Miklos Bognar2,1, Balazs Aczel2

1. Doctoral School of Psychology, ELTE Eotvos Lorand University, Budapest, Hungary
2. Institute of Psychology, ELTE Eotvos Lorand University, Budapest, Hungary

Originally published on December 12, 2025 at: 

Abstract

The peer-review system remains central to scientific communication, yet increasing submission volumes, shifts toward open access publishing, and disruptions such as the Covid-19 pandemic have raised concerns about its efficiency and accuracy. This study aims to systematically map trends in acceptance and publication delays across more than six million PubMed-indexed articles published between 2016 and 2025. Using journal metadata and peer-review information, we will examine how journal characteristics, reviewer practices, and field-specific factors shape these delays. To achieve this, we will employ a combination of descriptive statistics, generalized additive models and machine learning algorithms to capture trends, seasonality, and journal-level effects. The project will provide a comprehensive description of how publication timelines have evolved and identify the key determinants driving variations across journals and disciplines.

Introduction

Current metascientific concerns

Recent crises in the social sciences, most notably the replication and reproducibility crisis, highlighted the need to reevaluate current methodological practices and the importance of a more introspective science (Fanelli, 2018; Frias‐Navarro et al., 2020). In addition to striving for more reliable and verifiable scientific content production, the conventions of scientific reporting have also been called into question, including concerns about a ‘crisis’ in the peer review and publication process (Horta & Jung, 2024; Tropini et al., 2023). The problem, in our view, stems from the inability of the review system to (1) keep up with the current volume of submissions; (2) detect all errors or transgressions; and (3) perform these tasks at a sufficiently fast pace (Drozdz & Ladomery, 2024; Godlee et al., 1998; Publons, 2018; Schroter et al., 2008).

Peer review system and publication lags

Pre-publication, single-anonymized peer review remains the standard method for evaluating empirical research across most scientific disciplines. Proponents contend that it represents the most practical approach currently available for identifying key findings and detecting errors, while reviewer anonymity is intended to minimize potential bias or conflicts between reviewers and authors (Kassirer, 1994). Nonetheless, the method has also faced substantial criticism. Because most journals do not compensate reviewers for their time, researchers’ existing workloads often take precedence over reviewing duties (Huisman & Smits, 2017). As a result, manuscripts often spend extended periods awaiting review. The waiting period does not rest solely on the reviewers, however, as Hadavand and colleagues (2024) found that journals and authors have the manuscript ‘in-hand’ for comparable durations. Beyond this inefficiency lies the issue of accuracy. Article retractions, errata, and expressions of concern are just a few examples of cases in which issues have been found and addressed. Based on the Retraction Watch database, articles published after 2005 have an average retraction time of 790 days (Retraction Watch Data, 2025), showing that the retraction procedure can take years; because of the negative connotations a retraction carries for both the authors and the journal, the procedure is also hard to initiate and complete (Casadevall et al., 2014; Williams & Wager, 2013). Retractions and corrections, however, represent merely a fraction of problems, as these were both noticed and deemed significant enough to warrant formal action.

Speed of scientific publishing

In the typical publication process, researchers submit their manuscripts to a journal, where an editor decides whether to send the submission for peer review or issue a desk rejection. If the manuscript is sent out, the editor invites two to three experts in the relevant field to evaluate it in detail. Reviewers may recommend rejection or acceptance, with or without requests for minor or major revisions. This process may repeat over several review cycles, culminating in the manuscript’s acceptance. The duration of this process is referred to as the acceptance lag (also known as editorial delay or acceptance delay, Amat, 2008; Luwel et al., 2020; Majumdar, 2023). The publication lag covers the period between acceptance and the article’s appearance online or in print (also called technical delay or publication delay, Ding & Du, 2023; Luwel et al., 2020). At this stage, most responsibilities shift to the journal, which handles tasks such as copyediting, DOI assignment, and integration into the online platform. Journals may calculate these lags in different ways, for example, by reporting only first revision times or in-principle acceptances, which makes journal-reported lags hard to compare. We therefore work with these simplified definitions, which allow the use of large databases. For simplicity, when discussing the entire process, we will use the term publication speed.
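The two lags defined above can be sketched in a few lines of Python; the dates below are invented for illustration:

```python
from datetime import date

def acceptance_lag(received: date, accepted: date) -> int:
    """Days from submission (receival) to acceptance."""
    return (accepted - received).days

def publication_lag(accepted: date, published: date) -> int:
    """Days from acceptance to online or print appearance."""
    return (published - accepted).days

# Hypothetical article: received Jan 10, accepted May 20, published Jul 1
lag_a = acceptance_lag(date(2020, 1, 10), date(2020, 5, 20))   # 131 days
lag_p = publication_lag(date(2020, 5, 20), date(2020, 7, 1))   # 42 days
```

Summing the two lags gives the total time an article spends between submission and public availability, which is what the term publication speed refers to.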

Minimizing lags is advantageous for several reasons. In the event of a rejection, it is in the authors’ interest to get prompt notification so they can make the required revisions and seek another outlet for publication. After acceptance, further waiting delays the public availability of potentially useful scientific results, slowing scientific progress in both the short and long term. Solomon and Björk (2012) found that the expected time to publication is highly relevant to authors in journal selection, ranking third after topic fit and journal quality. This is unsurprising, given that the publication process can be very frustrating, especially considering the possibility that authors need to resubmit their work to different journals (Björk & Solomon, 2013; Powell, 2016). It is worthwhile for the scientific community to consider the mental well-being of researchers. To reevaluate the current system, however, we first need to understand how it works, looking at temporal changes and their determinants.

Publishing has moved almost exclusively to online platforms, which eliminates the difficulties associated with print formats, such as printing time, organizing articles into issues, and the sequential publication of accepted but not yet published backlogs. This does not mean, however, that online publishing is without its own peculiarities. While software innovations can shorten publication lags, the peer-review process has largely remained unchanged (Horta & Jung, 2024).

Potential contributors to lags

For the purposes of our study, journals can be categorized as follows: by metrics, by disciplinary or thematic specialization, and by open-access status. Journal metrics serve as bibliometric indicators of quality and impact, providing information about a journal’s relative standing within its subject area, its citation performance, and other relevant characteristics. One commonly used metric is the quartile ranking, which divides journals within a given discipline into four equally sized quality tiers, with Q1 representing the highest. Another widely used indicator is the impact factor, calculated as the number of citations received in a given year by articles published in the preceding two years, divided by the number of those articles (Clarivate, n.d.). Finally, the h-index reflects the number h of a journal’s publications that have each received at least h citations (Hirsch, 2005).
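As a minimal illustration of the h-index definition above, the sketch below computes it from a list of invented citation counts:

```python
def h_index(citations):
    """Largest h such that at least h publications have >= h citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cited, start=1):
        if c >= i:          # the i-th most-cited item still has >= i citations
            h = i
        else:
            break
    return h

# A journal whose articles received 10, 8, 5, 4, 3 citations has h = 4:
# four articles have at least 4 citations each, but not five with 5.
h_index([10, 8, 5, 4, 3])
```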

The evaluation of journals has increasingly become intertwined with questions of publishing profitability. The rise of non-diamond open-access models raises a question of incentives, as the Mertonian norm of communality, i.e., the free availability of scientific findings to every scientist or even the general population, is now conditional on the payment of an open-access fee (Lerdau, 2022). Another recent phenomenon is the rise and persistence of predatory outlets that exploit the flaws of the current scientific system for monetary gain (Berger & Cirasella, 2015). While open access has broadened accessibility, it has also introduced financial incentives that may prioritize speed over rigor. Journals and publishers do not need to actively encourage expedited peer review if the interests of authors, reviewers, editors, and owners align. This raises concerns about whether the push for rapid dissemination is necessarily aligned with quality.

At the same time, disciplinary differences complicate these dynamics: some fields operate with longer peer review and revision cycles, while others may sustain faster publication rhythms. These variations do not necessarily stem from the difficulty of the written article or anything inherent in the discipline.

Previous findings on acceptance and publication lags

Although several articles have been published on this topic, most have focused on a single discipline or research area and, as a result, have been unable to analyze a large number of articles. For pharmaceutical papers, the average acceptance lag was 105 days between 2009 and 2018 (Mendes et al., 2021). Across 18 medical and related journals, Sebo and colleagues (2019) found the median acceptance lag in 2016 to be 123 days, with a median publication lag of 68 days. In the Journal of Communication, the mean acceptance lag was 246 days, while publication after acceptance took, on average, 33 days for articles published from 2018 to 2022 (Tóth et al., 2025). The most drastic example comes from economics, where the top five journals in the field have acceptance times of at least a year (Hadavand et al., 2024).

Luwel et al. (2020) conducted the most representative study in our literature review, examining more than 3,000,000 articles from the Elsevier database and focusing on lags between 2000 and 2016. They concluded that publication lags for online publication have gradually decreased over the years, but the picture is complicated by differences between disciplines and the emergence of open-access journals. Submission-to-acceptance times, however, stagnated at a median of 120 days.

The factors that influence publication lags, such as technological developments and the organizational operations of publishers or journals, cumulatively take less time than review and acceptance, a process that can be considerably longer for a variety of reasons, including the recruitment of reviewers, delays in reviewer responses, the need for substantial revisions, or multiple rounds of requests. It is worth noting that one of the greatest challenges to the peer-review process is that reviewers generally provide their services without compensation, particularly in most social science fields (Alberts et al., 2008). In addition, reviewing requires a great deal of work and concentration from people who are often already overloaded (Diederich, 2013). Consequently, when researchers submit their manuscripts, they cannot accurately estimate when their work will be publicly available.

The last ten years

In December 2019, the COVID-19 (referred to as Covid hereafter) outbreak emerged and developed into a global pandemic by 2020, affecting not only public health but also scientific activities (Nane et al., 2023). Remote work altered the workload of researchers and teachers, and the number of articles on Covid grew exponentially within a few months, with some of these articles produced using questionable research practices (QRPs; Bramstedt, 2020). According to an analysis published in Nature, more than 100,000 articles have been published on the topic of Covid, with an increasing number of academics trying to link their field of research to the pandemic over time (Else, 2020). The authors also observed that Covid articles were accepted more quickly than those on other topics. While the overall number of publications in medical journals grew in 2020, there were 18% fewer non-Covid articles, a result of researchers concentrating their resources on the study of Covid (Raynaud et al., 2021). This is contrasted, however, by the increase in the overall number of manuscript submissions in the same period (Else, 2020). Additionally, the beginning of the outbreak was characterized by expedited acceptance and publication times of no more than a few days, a result of fast-track review and coordinated submission-review-publication cycles (Kun, 2020). As Sikdar and colleagues (2024) point out, a combination of topic constriction, limited working conditions, and focused incentives was needed for this unique period in scientific publishing. Overall, these findings suggest that it is worth examining pre- and post-Covid trends in acceptance and publication lags. Other potential factors include technological advances that may have affected publication lags and the open science movement in general. Lastly, the rise of open-access and mega-journals also warrants further analysis.

Mega-journals, as a unique publishing model, began with the establishment of PLOS ONE to leverage the untapped potential of publishing scientifically sound papers that do not meet the novelty and importance criteria of the “big” journals (Björk, 2018). Mega-journals provide a new type of academic publishing opportunity, characterized by openly accessible articles, article processing charges (APCs), and manuscripts judged solely on scientific soundness rather than topic, relevance, or other factors (Björk, 2018). This results in a large number of publications in a given journal, as any article that is “appropriate” from a scientific-methodology perspective can be published. Such volume requires either rapid manuscript evaluation or access to a large pool of editors and reviewers, which, in principle, can lead to shorter acceptance lags. There is evidence that, in the early period, mega-journals were in fact faster than their regular counterparts, but the increased volume of received manuscripts may have negated this effect (Björk, 2021). On the other hand, even if mega-journals do not review and publish manuscripts faster than regular journals, the higher acceptance rate these journals are known for means that authors are less likely to need to find another outlet for their results, shortening the overall publication period as experienced by researchers (Björk, 2015).

Current study

Given the aforementioned reasons, it is important to understand how acceptance and publication lags have evolved in recent years and what factors shape them. Because prior studies have typically focused on single disciplines or narrower datasets, little is known about broader, cross-disciplinary patterns, especially in light of transformative events such as the Covid-19 pandemic, the spread of open access publishing, and the rise of mega-journals. By examining over six million articles published between 2016 and 2025 in the PubMed database, the present study aims to provide an overview of trends in acceptance and publication lags. The goal is to identify overarching trends in the literature, with disciplines and journals as the units of analysis. The inclusion of the number of review rounds, reviewers, and revision dates allows further investigation into acceptance.

Methods

We used the PubMed database, hosted by the National Center for Biotechnology Information, to obtain the data necessary for the analysis (National Center for Biotechnology Information, n.d.). We retrieved metadata from this database via its Entrez Programming Utilities (E-utilities) API (application programming interface; Sayers, 2018). The data were downloaded in May 2025 (and will be redownloaded in 2026 so that the final article covers all of 2025). We downloaded only articles published between 2015 and 2025. The article metadata were obtained as XML files; because we needed only the metadata rather than the full texts, we had public access to all articles. We first converted the XML files to JSON and then to TSV files. The dataset thus consists of more than 9,000,000 article metadata records before filtering. In addition, we joined data from the Directory of Open Access Journals (DOAJ), Web of Science (WoS), Scimago Journal Ranking (SJR; hereinafter: Scimago), the Norwegian Publication Indicator (NPI), and peer-review metadata from Clarivate. Data were acquired using Python, R, and Bash scripts and stored on the IT Directorate HPC cluster at Eötvös Loránd University, Budapest, Hungary. Due to the size of the data, only the generating code is shared.
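As a simplified illustration of the parsing step, the sketch below pulls the history dates out of a stripped-down record modeled on the `<History>`/`<PubMedPubDate>` elements of PubMed XML; the real E-utilities output is far richer, and this toy record and function are ours, not the project code:

```python
import xml.etree.ElementTree as ET
from datetime import date

# Minimal, hypothetical record; real PubMed XML contains many more fields.
RECORD = """
<PubmedArticle>
  <PubmedData>
    <History>
      <PubMedPubDate PubStatus="received">
        <Year>2020</Year><Month>1</Month><Day>10</Day>
      </PubMedPubDate>
      <PubMedPubDate PubStatus="accepted">
        <Year>2020</Year><Month>5</Month><Day>20</Day>
      </PubMedPubDate>
    </History>
  </PubmedData>
</PubmedArticle>
"""

def history_dates(xml_text):
    """Map each PubStatus value (received, accepted, ...) to a date."""
    root = ET.fromstring(xml_text)
    out = {}
    for d in root.iterfind(".//History/PubMedPubDate"):
        out[d.get("PubStatus")] = date(
            int(d.findtext("Year")), int(d.findtext("Month")), int(d.findtext("Day"))
        )
    return out

dates = history_dates(RECORD)
lag = (dates["accepted"] - dates["received"]).days  # acceptance lag for this toy record
```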

Databases

First, we downloaded the Scimago Journal & Country Rank ‘Journal Rankings’ database for each year (2016–2024, SCImago, n.d.). The Journal Rankings include metrics for each journal each year, such as H-index, best quartile, total number of references and citations in each period, total number of documents for a journal, average citations per article, and the journal ranking prepared by Scimago. We analyzed the following metrics: SJR (Scimago’s own ranking), h-index, ranking, and best quartile for a given year.

The second database was Web of Science, which we used to group journals into scientific disciplines (Clarivate, n.d.). WoS uses the All Science Journal Classification (ASJC) discipline structure, which divides scientific works into five major disciplines (Life Sciences, Social Sciences and Humanities, Physical Sciences, Health Sciences, and Multidisciplinary), within those into fields of science (e.g., Psychology), and finally into subcategories (e.g., Cognitive Psychology). Each group has a numeric code; for example, Multidisciplinary is 1000. For clarity, we focused only on the five overarching disciplines.

The Norwegian Publication Indicator was used to group journals into quality levels. Managed by the Norwegian National Board of Scholarly Publishing (NPU) under the country’s ministry of education, it is a classification system covering scientific publication channels such as journals, book publishers, and certain conference proceedings (Schneider, 2009). Its goal is to promote high-quality research and to aid national funding decisions by classifying these channels into three levels: 0, 1, and 2. Level 1 journals are pre-publication peer-reviewed, contain academic articles, are not institution-specific, and have an ISSN, an editorial board, a website, and so on; for further information, see the Norwegian Directorate for Higher Education and Skills website (https://hkdir.no/en). Most journals are Level 1. To receive a Level 2 rating, a journal must be prestigious and rank in the top 20% of academic output within its discipline; such journals are almost exclusively specialized, internationally known outlets focused on novelty and importance rather than scientific soundness alone, so mega-journals usually do not qualify. A publication channel that meets neither of these levels receives a Level 0 rating. If there is no consensus on a journal, it may receive an X rating, indicating uncertainty about its quality and publication practices, such as hastened peer review or predatory financial practices. So-called predatory journals, which do not meet scientific standards and instead prey on unaware or morally flexible scientists, would fall into Level 0 or X. As with Scimago, NPI assigns levels yearly. NPI also records journals’ disciplines, but its agreement with WoS is only 72%, so we used WoS for disciplinary classification.

We used the Directory of Open Access Journals database to determine open-access status. This database includes information such as article processing charges, evaluation methods, open access compliance, and DOAJ’s own seal certifying compliance with open access criteria. DOAJ covers different types of open access, but hybrid journals that publish both closed and open access articles are excluded, ensuring that articles in the flagged open access journals are themselves open access.

Retraction Watch data have also been incorporated (Retraction Watch Data, 2025), allowing us to check the accuracy of PubMed ‘retracted’ tags and to analyze retracted articles separately.

We are currently awaiting peer-review metadata from Clarivate’s Peer Review Recognition Service, which will include the review acceptance date (the date the reviewer accepted the invitation to review), the review date, and the review round number. These data will be joined via the article DOI. The dataset will contain 10 million articles, whose overlap with our data is not known beforehand.

Procedure

During the initial screening, we kept only articles that included at least the submission (receival) date or the acceptance date. We then kept those with the publication type “Journal Article”. Furthermore, we filtered out articles without an ISSN link, as we used this identifier to attach other journal-level information. Since PubMed does not require journals to provide submission and acceptance dates, and we could not find a precise definition of lag for each journal, we only looked at aggregate data. The publicly available data can be found on PubMed, and the analysis code can be found on the study’s registration page.

We ensured data integrity with multiple filters. We kept only articles in which the submission date preceded both the acceptance and the publication date, and the acceptance date preceded the publication date. Although some of the excluded records may reflect real dates, we assume that omitting them removed more incorrect than correct data. We then calculated the acceptance lag (acceptance date minus submission date) and the publication lag (publication date minus acceptance date). When multiple instances of an article shared the same title, we kept only one. Finally, we filtered out research published before 2016, because those years contained disproportionately few articles (<10% compared with later years), and we limited the earliest receival date to January 1, 2013. The next step was to attach the other databases: we merged all datasets by ISSN or DOI, and when a journal provided two ISSNs (print and electronic), we kept both to increase the likelihood of a successful match. We did this for every database.
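Assuming the intended ordering is submission strictly before acceptance and acceptance strictly before publication, the date filters can be sketched as follows (the function name and cutoff constants are ours, for illustration only):

```python
from datetime import date

MIN_RECEIVED = date(2013, 1, 1)   # earliest allowed receival date
MIN_PUB_YEAR = 2016               # years before this had too few articles

def is_valid(received: date, accepted: date, published: date) -> bool:
    """Apply the date-ordering and range integrity filters."""
    return (
        received >= MIN_RECEIVED
        and published.year >= MIN_PUB_YEAR
        and received < accepted < published   # strict ordering, no same-day pairs
    )

is_valid(date(2019, 1, 10), date(2019, 5, 20), date(2019, 7, 1))   # kept
is_valid(date(2019, 5, 20), date(2019, 5, 20), date(2019, 7, 1))   # dropped: same day
```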

We linked the four databases mentioned above to the PubMed data. Since the databases do not overlap perfectly, the different analyses use different sample sizes and may include different articles depending on which database was needed. For this reason, each analysis has a dedicated sample size. Using Björk’s (2018) criteria, we classified 21 scientific journals as mega-journals; some labeled themselves as mega-journals, while we placed others in this group because they met the criteria. The list of journals is available in the Supplement. Table 1 shows which variables come from which database. PubMed provides its own label for retracted articles, which we used in the analysis.
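One simple way to realize the double-ISSN matching described above is to index each journal record under both its print and electronic ISSN, so an article carrying either form resolves to the same metadata. This is only a sketch with invented identifiers, not the project’s actual merging code:

```python
def build_issn_index(journals):
    """Index journal-level metadata under both print and electronic ISSNs."""
    index = {}
    for j in journals:
        for key in ("issn_print", "issn_electronic"):
            if j.get(key):
                index[j[key]] = j   # both ISSNs point at the same record
    return index

# Hypothetical journal record with both ISSN forms
journals = [{"issn_print": "1234-5678", "issn_electronic": "8765-4321", "quartile": "Q1"}]
index = build_issn_index(journals)
index["1234-5678"]["quartile"]   # same record as index["8765-4321"]
```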

It is important to note that there is article-level and journal-level information, which we highlight separately in the results. Data integrity and handling checks were done with the validate package (Van Der Loo & De Jonge, 2021).

Since no individuals participated in the research, we did not seek ethical approval. Code for data retrieval and wrangling is available for all datasets in their dedicated folder in the study’s OSF page.

Table 1. Variable sources

Source | Variables
PubMed | Date of receival, date of acceptance, date of publication, keywords, retracted articles
Scimago | H-index, rank, quartile
Directory of Open Access Journals | Open access, APC
Web of Science | Discipline
Norwegian Publication Indicator | NPI level
Retraction Watch | Retraction date, retraction reason
Reviewer Recognition Service (pending) | Review acceptance date, review date, number of reviewers

Analysis

We will perform statistical analyses on the 6,642,176 articles collected in this way using open-source software (e.g., R 4.3.3). We identified Covid articles as follows: if the title or keywords contained any known name of the disease (COVID, COVID-19, SARS-CoV-2, coronavirus, etc.), we considered it a Covid article (for the exact list, see the Supplementary Information). It is important to note that we do not include articles published only in print format, as these are negligible compared with those published online or in both formats, given current publication trends.
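The keyword screening can be sketched as a simple substring match over title and keywords; the term list below is an illustrative subset, not the exact list from the Supplementary Information:

```python
# Illustrative subset of disease names; the full list is in the Supplement.
COVID_TERMS = ("covid", "sars-cov-2", "coronavirus")

def is_covid_article(title, keywords):
    """Flag an article as Covid-related if any term appears in title or keywords."""
    text = " ".join([title, *keywords]).lower()
    return any(term in text for term in COVID_TERMS)

is_covid_article("Long-term outcomes of SARS-CoV-2 infection", [])   # True
is_covid_article("Sleep quality in shift workers", ["fatigue"])      # False
```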

Descriptive analyses will be used to compare high-level changes in lags in PubMed articles and along disciplinary and journal metric lines.

Longitudinal trends and associated factors will be analyzed with Generalized Additive Models (GAMs) using the mgcv package. Model 1 will regress acceptance and publication lags, separately, on year, month, and weekday only. Smoothing parameters will be selected with REML. Each subsequent model will add parameters; the final model aims for the highest predictive accuracy with the fewest variables by applying additional penalties. If autocorrelation is present, as measured by the ACF and PACF, we will include an autocorrelation term. AIC, MAE, RMSE, and adjusted R2 will be reported.
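The planned modeling is in R (mgcv, with R’s built-in ACF/PACF functions); purely to illustrate the autocorrelation check, here is a language-agnostic sketch of the sample autocorrelation function:

```python
def acf(series, max_lag):
    """Sample autocorrelation function for lags 1..max_lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    out = []
    for k in range(1, max_lag + 1):
        cov = sum((series[t] - mean) * (series[t + k] - mean) for t in range(n - k))
        out.append(cov / var)
    return out

# A strongly trending series shows large positive lag-1 autocorrelation:
acf([1, 2, 3, 4, 5, 6, 7, 8], max_lag=2)
```

A large spike at lag 1 in a plot of these values would indicate that successive monthly lag averages are not independent and that an autocorrelation term is warranted.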

We will also fit a CatBoost regression model to predict article acceptance lags (in days) from manuscript- and journal-level features using the catboost R package. Hyperparameters will be tuned manually for the best performance achievable within a reasonable runtime (< 1 day on the HPC cluster). The data will be split randomly into training (80%) and test (20%) sets. MAE, RMSE, and R2 will be reported, along with SHAP values for feature importance. We will also explore other ML/DL approaches, such as neural networks designed for tabular data.
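The split and error metrics are standard; a minimal sketch, omitting the CatBoost model itself and using an arbitrary seed of our own choosing for reproducibility:

```python
import math
import random

def train_test_split(rows, test_frac=0.2, seed=42):
    """Random 80/20 split; the seed is an arbitrary illustrative choice."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

def mae(y_true, y_pred):
    """Mean absolute error in the same units as the target (days)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error; penalizes large misses more than MAE."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

train, test = train_test_split(list(range(100)))   # 80 training rows, 20 test rows
```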

Supporting Information:

Supporting information files can be found here: https://osf.io/6a8kg/files/k3fre

References

Alberts, B., Hanson, B., & Kelner, K. L. (2008). Reviewing Peer Review. Science, 321(5885), 15–15. https://doi.org/10.1126/science.1162115

Amat, C. B. (2008). Editorial and publication delay of papers submitted to 14 selected Food Research journals. Influence of online posting. Scientometrics, 74(3), 379–389. https://doi.org/10.1007/s11192-007-1823-8

Berger, M., & Cirasella, J. (2015). Beyond Beall’s List: Better understanding predatory publishers. College & Research Libraries News, 76(3), 132–135. https://doi.org/10.5860/crln.76.3.9277

Björk, B.-C. (2015). Have the “mega-journals” reached the limits to growth? PeerJ, 3, e981. https://doi.org/10.7717/peerj.981

Björk, B.-C. (2018). Evolution of the scholarly mega-journal, 2006–2017. PeerJ, 6, e4357. https://doi.org/10.7717/peerj.4357

Björk, B.-C. (2021). Publishing speed and acceptance rates of open access megajournals. Online Information Review, 45(2), 270–277. https://doi.org/10.1108/OIR-04-2018-0151

Björk, B.-C., & Solomon, D. (2013). The publishing delay in scholarly peer-reviewed journals. Journal of Informetrics, 7(4), 914–923. https://doi.org/10.1016/j.joi.2013.09.001

Bramstedt, K. A. (2020). The carnage of substandard research during the COVID-19 pandemic: A call for quality. Journal of Medical Ethics, 46(12), 803–807. https://doi.org/10.1136/medethics-2020-106494

Casadevall, A., Steen, R. G., & Fang, F. C. (2014). Sources of error in the retracted scientific literature. The FASEB Journal, 28(9), 3847–3855. https://doi.org/10.1096/fj.14-256735

Clarivate. (n.d.). The Clarivate Impact Factor. Retrieved October 20, 2025, from https://clarivate.com/academia-government/essays/impact-factor/

Diederich, F. (2013). Are We Refereeing Ourselves to Death? The Peer‐Review System at Its Limit. Angewandte Chemie International Edition, 52(52), 13828–13829. https://doi.org/10.1002/anie.201308804

Ding, J., & Du, D. (2023). A study of the correlation between publication delays and measurement indicators of journal articles in the social network environment—Based on online data in PLOS. Scientometrics, 128(3), 1711–1743. https://doi.org/10.1007/s11192-023-04640-6

Drozdz, J. A., & Ladomery, M. R. (2024). The Peer Review Process: Past, Present, and Future. British Journal of Biomedical Science, 81, 12054. https://doi.org/10.3389/bjbs.2024.12054

Else, H. (2020). How a torrent of COVID science changed research publishing—In seven charts. Nature, 588(7839), 553. https://doi.org/10.1038/d41586-020-03564-y

Fanelli, D. (2018). Is science really facing a reproducibility crisis, and do we need it to? Proceedings of the National Academy of Sciences, 115(11), 2628–2631. https://doi.org/10.1073/pnas.1708272114

Frias‐Navarro, D., Pascual‐Llobell, J., Pascual‐Soler, M., Perezgonzalez, J., & Berrios‐Riquelme, J. (2020). Replication crisis or an opportunity to improve scientific production? European Journal of Education, 55(4), 618–631. https://doi.org/10.1111/ejed.12417

Godlee, F., Gale, C. R., & Martyn, C. N. (1998). Effect on the Quality of Peer Review of Blinding Reviewers and Asking Them to Sign Their Reports: A Randomized Controlled Trial. JAMA, 280(3), 237. https://doi.org/10.1001/jama.280.3.237

Hadavand, A., Hamermesh, D. S., & Wilson, W. W. (2024). Publishing Economics: How Slow? Why Slow? Is Slow Productive? How to Fix Slow? Journal of Economic Literature, 62(1), 269–293. https://doi.org/10.1257/jel.20221653

Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569–16572. https://doi.org/10.1073/pnas.0507655102

Horta, H., & Jung, J. (2024). The crisis of peer review: Part of the evolution of science. Higher Education Quarterly, 78(4), e12511. https://doi.org/10.1111/hequ.12511

Huisman, J., & Smits, J. (2017). Duration and quality of the peer review process: The author’s perspective. Scientometrics, 113(1), 633–650. https://doi.org/10.1007/s11192-017-2310-5

Kassirer, J. P. (1994). Peer Review: Crude and Understudied, but Indispensable. JAMA, 272(2), 96. https://doi.org/10.1001/jama.1994.03520020022005

Kun, Á. (2020). Time to Acceptance of 3 Days for Papers About COVID-19. Publications, 8(2), 30. https://doi.org/10.3390/publications8020030

Lerdau, M. (2022). The challenge of open access incentives. Science, 378(6617), 256–256. https://doi.org/10.1126/science.ade7288

Luwel, M., Van Eck, N. J., & Van Leeuwen, T. (2020). Characteristics of Publication Delays Over the Period 2000–2016. In C. Daraio & W. Glänzel (Eds.), Evaluative Informetrics: The Art of Metrics-Based Research Assessment (pp. 89–114). Springer International Publishing. https://doi.org/10.1007/978-3-030-47665-6_4

Majumdar, S. (2023). A nine-year (2012–2020) study of publication delay in an open access Library and Information Science journal. Malaysian Journal of Library and Information Science, 28(1), 35–50. https://doi.org/10.22452/mjlis.vol28no1.3

Mendes, A. M., Tonin, F. S., Mainka, F. F., Pontarolo, R., & Fernandez-Llimos, F. (2021). Publication speed in pharmacy practice journals: A comparative analysis. PLOS ONE, 16(6), e0253713. https://doi.org/10.1371/journal.pone.0253713

Nane, G. F., Robinson-Garcia, N., Van Schalkwyk, F., & Torres-Salinas, D. (2023). COVID-19 and the scientific publishing system: Growth, open access and scientific fields. Scientometrics, 128(1), 345–362. https://doi.org/10.1007/s11192-022-04536-x

National Center for Biotechnology Information. (n.d.). PubMed [Dataset]. U.S. National Library of Medicine. https://www.ncbi.nlm.nih.gov/pubmed/

Powell, K. (2016). Does it take too long to publish research? Nature, 530(7589), 148–151. https://doi.org/10.1038/530148a

Publons. (2018). Publons’ Global State Of Peer Review 2018 (0 ed.). Publons. https://doi.org/10.14322/publons.GSPR2018

Raynaud, M., Goutaudier, V., Louis, K., Al-Awadhi, S., Dubourg, Q., Truchot, A., Brousse, R., Saleh, N., Giarraputo, A., Debiais, C., Demir, Z., Certain, A., Tacafred, F., Cortes-Garcia, E., Yanes, S., Dagobert, J., Naser, S., Robin, B., Bailly, É., … Loupy, A. (2021). Impact of the COVID-19 pandemic on publication dynamics and non-COVID-19 research production. BMC Medical Research Methodology, 21(1), 255. https://doi.org/10.1186/s12874-021-01404-9

Retraction Watch Data. (2025). [Dataset]. https://gitlab.com/crossref/retraction-watch-data

Sayers, E. (2018). E-utilities Quick Start. In Entrez Programming Utilities Help [Internet]. National Center for Biotechnology Information (US). https://www.ncbi.nlm.nih.gov/books/NBK25500/

Schneider, J. W. (2009). An Outline of the Bibliometric Indicator Used for Performance-Based Funding of Research Institutions in Norway. European Political Science, 8(3), 364–378. https://doi.org/10.1057/eps.2009.19

Schroter, S., Black, N., Evans, S., Godlee, F., Osorio, L., & Smith, R. (2008). What errors do peer reviewers detect, and does training improve their ability to detect them? Journal of the Royal Society of Medicine, 101(10), 507–514. https://doi.org/10.1258/jrsm.2008.080062

SCImago. (n.d.). SJR—SCImago Journal & Country Rank. SCImago. http://www.scimagojr.com/

Sebo, P., Fournier, J. P., Ragot, C., Gorioux, P.-H., Herrmann, F. R., & Maisonneuve, H. (2019). Factors associated with publication speed in general medical journals: A retrospective study of bibliometric data. Scientometrics, 119(2), 1037–1058. https://doi.org/10.1007/s11192-019-03061-8

Sikdar, S., Venturini, S., Charpignon, M.-L., Kumar, S., Rinaldi, F., Tudisco, F., Fortunato, S., & Majumder, M. S. (2024). What we should learn from pandemic publishing. Nature Human Behaviour, 8(9), 1631–1634. https://doi.org/10.1038/s41562-024-01969-7

Solomon, D. J., & Björk, B. (2012). Publication fees in open access publishing: Sources of funding and factors influencing choice of journal. Journal of the American Society for Information Science and Technology, 63(1), 98–107. https://doi.org/10.1002/asi.21660

Tóth, T., Demeter, M., Háló, G., Bozdag, U., & Bartóki-Gönczy, B. (2025). Time Matters: What Factors Affect Submission-To-Acceptance Time in the Journal of Communication? Communication Reports, 38(1), 13–24. https://doi.org/10.1080/08934215.2024.2424544

Tropini, C., Finlay, B. B., Nichter, M., Melby, M. K., Metcalf, J. L., Dominguez-Bello, M. G., Zhao, L., McFall-Ngai, M. J., Geva-Zatorsky, N., Amato, K. R., Undurraga, E. A., Poinar, H. N., & Gilbert, J. A. (2023). Time to rethink academic publishing: The peer reviewer crisis. mBio, 14(6), e01091-23. https://doi.org/10.1128/mbio.01091-23

Van Der Loo, M. P. J., & De Jonge, E. (2021). Data Validation Infrastructure for R. Journal of Statistical Software, 97(10). https://doi.org/10.18637/jss.v097.i10

Williams, P., & Wager, E. (2013). Exploring Why and How Journal Editors Retract Articles: Findings From a Qualitative Study. Science and Engineering Ethics, 19(1), 1–11. https://doi.org/10.1007/s11948-011-9292-0

Declarations

Ethics

No ethics approval was required or obtained.

Competing Interests

The authors declare that no conflicts of interest exist.

Funding

The study received no funding.

Author Contributions

Conceptualization – Balazs Aczel, Dominik Dianovics; Data curation – Dominik Dianovics, Marton Aron Varga; Formal analysis – Dominik Dianovics, Marton Aron Varga; Investigation – Miklos Bognar, Dominik Dianovics, Marton Aron Varga; Methodology – Balazs Aczel, Miklos Bognar, Dominik Dianovics, Marton Aron Varga; Project administration – Balazs Aczel; Software – Dominik Dianovics, Marton Aron Varga; Supervision – Balazs Aczel, Miklos Bognar; Validation – Dominik Dianovics, Marton Aron Varga; Visualization – Dominik Dianovics, Marton Aron Varga; Writing – original draft – Dominik Dianovics, Marton Aron Varga; Writing – review and editing – Balazs Aczel, Miklos Bognar, Dominik Dianovics, Marton Aron Varga

Editors

Kathryn Zeiler
Editor-in-Chief

Wolfgang Kaltenbrunner
Handling Editor

Editorial assessment

by Wolfgang Kaltenbrunner

DOI: 10.70744/MetaROR.315.1.ea

This research plan presents an ambitious large-scale bibliometric study of acceptance and publication lags in scholarly publishing. Both reviewers highlight its relevance, strong data pipeline, and potential to make a valuable contribution to the science of science literature. At the same time, they identify several areas for improvement. They emphasize the need for a clearer conceptual framing, including more explicit motivation, sharper research questions, and a better structured introduction that clearly connects the various contributing factors. Methodological clarity should also be strengthened, particularly regarding the role and justification of variables, data sources, and potential biases (e.g., missing data and dataset limitations). In addition, the reviewers recommend broader engagement with relevant literature and contextual factors, as well as a more balanced framing of the publication system beyond an author-centric perspective. Overall, the study is seen as promising but would benefit from improved coherence, clearer positioning, and greater methodological transparency.

Competing interests: None.

Peer review 1

Alexander Schniedermann

DOI: 10.70744/MetaROR.315.1.rv1

The research plan “Acceptance and publication lags in scholarly journals between 2016 and 2025” describes a bibliometric project to systematically map and study the factors that shape the time dimension of the scholarly publishing process. In particular, the project investigates the acceptance lag (the time from submission to acceptance) and the publication lag (the time from acceptance to publication) of roughly nine million PubMed articles.
At this stage, the research plan provides good insight into an interesting project. The study background is rich and considers a wide range of perspectives on a complex phenomenon. In addition, an extensive and strong data pipeline was developed and well explained. Initial data retrieval and analysis code was provided. However, the project can be improved in the following ways.

The current study (report) lacks a clear motivation and concise research questions. The background suggests that longer publication time lags are a bad thing but does not really argue why that is the case, aside from some individual challenges (author frustration, mental well-being). Clarifying the study rationale would also help to define more concise research questions and inform data selection and analytical decisions. In this regard, I am curious if the following dimensions have been considered:

  • The impact of preprinting on time lags, e.g., if already preprinted submissions experience a more relaxed publishing process, because claims are already public.
  • The role of the publisher on publication time lags. This topic is already mentioned in the background but not included as a variable in the analysis.
  • The impact of communication infrastructure and editorial systems on the acceptance lag.

In contrast, the aim to identify “broader, cross-disciplinary patterns” and the use of the five major WoS Subject Classifications (by the way, ASJC is from Scopus) might be revised, given that PubMed data is used as a baseline. Although PubMed covers several works from the social sciences, it does not represent the class “Social Sciences and Humanities” the same way it represents “Life Sciences.” Changing to a different base data set seems infeasible, so I would suggest framing the whole project more towards scholarly communication in the life sciences.

How the PubMed data were retrieved should be clarified. The text mentions the PubMed API (E-Utilities), while the GitHub code suggests retrieval from PubMed’s FTP service (bulk download). In any case, the use of PubMed data should be referenced with this paper: https://pubmed.ncbi.nlm.nih.gov/34850941/, rather than the E-Utilities documentation.
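Regardless of the retrieval route (E-Utilities or the FTP baseline), the relevant date fields live in the same `<History>` element of the PubMed XML. A minimal, self-contained sketch of extracting them and computing the acceptance lag, using a fabricated sample record purely for illustration:

```python
# Sketch: extracting submission/acceptance dates from a PubMed XML record.
# The sample record below is fabricated; real records come from E-Utilities
# efetch (retmode=xml) or the FTP baseline files.
from datetime import date
import xml.etree.ElementTree as ET

SAMPLE = """<PubmedArticle>
  <PubmedData>
    <History>
      <PubMedPubDate PubStatus="received">
        <Year>2020</Year><Month>1</Month><Day>15</Day>
      </PubMedPubDate>
      <PubMedPubDate PubStatus="accepted">
        <Year>2020</Year><Month>6</Month><Day>2</Day>
      </PubMedPubDate>
    </History>
  </PubmedData>
</PubmedArticle>"""

def history_date(root, status):
    """Return the <History> date with the given PubStatus, or None if absent."""
    node = root.find(f".//PubMedPubDate[@PubStatus='{status}']")
    if node is None:
        return None
    return date(int(node.findtext("Year")),
                int(node.findtext("Month")),
                int(node.findtext("Day")))

root = ET.fromstring(SAMPLE)
received = history_date(root, "received")
accepted = history_date(root, "accepted")
acceptance_lag = (accepted - received).days  # 139 days for this sample
print(acceptance_lag)
```

The `None` return path matters in practice: as the second review notes, many records simply lack a `received` date, and a pipeline should record that gap rather than drop it silently.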

Finally, and more with respect to the final project and its reporting, I recommend reconsidering the overall contextualization of the research problem. Currently, the background section focuses predominantly on the perspectives and interests of scientific authors and their inputs into the publishing process (e.g., “expected time to publication is highly relevant to authors” and the “frustrating” publication process). Relatedly, the background mentions the “inability(ies) of the reviewing system,” although it acknowledges how peer reviewers are overloaded with work. I would also recommend discussing the role of evaluative pressures and centering the whole framing more on the already mentioned “alignment” and the question of what a healthy publication system would look like.

Competing interests: I have no personal or professional relationship to the authors. My own works (paper, data, code) were not referenced in the current manuscript, and I did not suggest including any reference to my work.

Peer review 2

Wenceslao Arroyo-Machado

DOI: 10.70744/MetaROR.315.1.rv2

This research plan is highly ambitious and points towards an investigation with the potential to yield very interesting and impactful results, which I look forward to seeing in due course. At the same time, it involves several elements that require further attention, particularly to ensure that the execution of the analysis is as robust as possible. Some of these aspects may already have been considered by the authors but are not explicitly reflected in the current document, or may simply benefit from additional clarification; for this reason, I outline them in detail below.

Strengths

The authors aim to analyse the evolution of peer review and publication processes over the last decade, in the context of multiple interfering factors ranging from mega-journals to the wave of COVID-19-related publications. This research plan is undoubtedly a meaningful contribution to the science of science literature. The authors also plan to employ robust statistical analyses that will not only allow these variations to be visualised and explored, but also quantified and tested, clarifying whether the influence of each factor is real and at what scale or with what concrete reach.

Structure and organisation

Whilst the authors’ proposal is clear and well-motivated, identifying several relevant contributing factors, their presentation throughout the introduction feels disorganised and insufficiently structured. This may seem a minor concern at the research plan stage, but it can genuinely affect the development of the paper for the following reasons:

  • The authors open by discussing the lag between submission and acceptance, but then shift to commenting on retractions without making the connection explicit. One can infer that retractions represent one of the visible and measurable consequences of a dysfunctional review process, but this is never stated. Beyond this, the multiple factors potentially contributing to publication lags are presented in a somewhat fragmented manner, as though they were independent, when the strength of this work lies precisely in modelling their joint influence. It is recommended that the introduction be reorganised to order these foundations clearly and distinguish between them.
  • In this regard, it is also analytically important to differentiate explicitly between structural factors sustained over time (such as open access models or journal metrics) and temporally bounded shocks such as COVID-19, whose effect on publication lags cannot be considered indefinite and should be treated accordingly in the analysis. Not all variables carry the same explanatory weight, and it is not always clear that the authors acknowledge this.
  • The authors should also make explicit, alongside their objectives, what changes or what actions become possible depending on the results obtained, something that is currently unclear. For instance, they could indicate that the analysis will allow the identification of fields or journals particularly affected by delays, highlight journals with stable patterns as examples of good practice (assuming variation is understood as something undesirable), or detect shifts in publication timing not attributable to any known factor. Given that the contributing factors are already largely known in the literature, the primary added value of this study lies in its scale and cross-disciplinary scope, and making this explicit would help frame the research more effectively and communicate its relevance more clearly.
  • Finally, the literature review omits several documented factors, including the reviewer-editor relationship (https://doi.org/10.1007/s11192-017-2309-y) and editorial arbitrariness (https://doi.org/10.1007/s11192-020-03348-1). It also makes no mention of cascading peer review, whereby a manuscript transferred immediately after rejection to another journal can accumulate editorial time that is not observable in PubMed, artificially inflating the recorded lags. It is recommended that these factors be incorporated into the literature review, even where they cannot be modelled directly, with an explicit acknowledgement of their potential influence and the limitations this imposes on the analysis.

Variables

The role of the variables in the methodology is not sufficiently clear and warrants more explicit discussion. Several specific concerns arise:

  • First, no distinction is made between variables acting as controls and those treated as independent objects of analysis. The most notable case is that of retractions. As noted above in relation to the introduction, one can infer that they may point towards an analysis of the consequences of publication lags, but this is never specified. If that is the intention, it should be stated explicitly; if it is not, their analytical role remains equally unclear.
  • Second, there is a complete absence of bibliographic variables such as document type, study design (PubMed offers a notably exhaustive classification in this regard), language, or manuscript length. Some of these have documented effects on publication timelines—for example, the language of publication (https://doi.org/10.1002/pra2.26). Their omission may introduce confounding into the results.
  • Third, the combined use of SJR, Web of Science, and the Norwegian Publication Indicator is not sufficiently justified. While these sources serve nominally different purposes, it is reasonable to expect that the information they capture overlaps considerably, and that merging them will reduce the effective dataset size. If Web of Science categories are used to control for subject area, the authors should assess whether SJR adds explanatory value not already captured, or whether a more granular article-level classification such as that provided by OpenAlex topics might be preferable, both for precision and to avoid the dataset reduction that results from cross-source merging.
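The dataset-reduction concern in the last point above can be made concrete with a toy sketch: each inner merge against an external source drops the journals that source does not cover, and requiring all three sources at once compounds the losses. All journal names below are hypothetical.

```python
# Sketch: how cross-source merging shrinks the effective sample.
# Journal names are hypothetical; real coverage comes from SJR, WoS, and
# the Norwegian Publication Indicator lists.
pubmed_journals = {"J Med A", "J Med B", "J Med C", "J Soc D"}
sjr = {"J Med A", "J Med B", "J Soc D"}
wos = {"J Med A", "J Med C"}
npi = {"J Med A", "J Med B"}

# Per-source retention after an inner merge with the PubMed sample.
sources = {"SJR": sjr, "WoS": wos, "NPI": npi}
for name, covered in sources.items():
    kept = pubmed_journals & covered
    print(f"{name}: {len(kept)}/{len(pubmed_journals)} journals retained")

# Requiring coverage in all three sources at once compounds the losses.
all_three = pubmed_journals & sjr & wos & npi
print(f"All sources: {len(all_three)}/{len(pubmed_journals)} journals retained")
```

Reporting these retention rates alongside the main results would let readers judge how much of the PubMed baseline the merged analysis actually represents.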

Limitations and further considerations

The study carries several important limitations that are not discussed and which merit attention.

  • The paper analyses the period up to 2025 but makes no mention of the emergence of large language models from late 2022 onwards and their progressive adoption throughout 2023 and beyond. The effects of LLMs on scientific writing, peer review, and editorial workflows are increasingly documented, and there is growing evidence that they have altered submission volumes, manuscript quality distributions, and potentially reviewer behaviour in ways that may be reflected in publication timelines. If the GAM models identify an unexplained trend shift in the post-2022 period, the authors will lack any analytical framework to interpret it. The authors should at minimum acknowledge this factor and discuss to what extent it may be affecting the patterns observed in the final years of the study period, or, ideally, consider whether it warrants inclusion as an additional variable or a justification for a temporal segmentation of the analysis.
  • The submission date is not recorded for all articles in PubMed, as the authors clearly acknowledge. However, no mention is made of the fact that this may generate substantial gaps that bias the results. Journals that do report these dates may be systematically different in terms of discipline, impact, or access model. It is therefore recommended not only to acknowledge this limitation but to dedicate a brief robustness analysis to reporting the pattern of missingness and discussing its potential impact on the representativeness of the findings. A similar concern applies when the PubMed records are filtered through the combination of multiple external sources, each of which will introduce its own coverage gaps.
  • Finally, PubMed indexes primarily biomedical and health sciences literature, which substantially limits the generalisability of conclusions regarding disciplinary differences or cross-field patterns. The authors should explicitly acknowledge this coverage bias and circumscribe the scope of their conclusions accordingly. This does not diminish the relevance of their findings, but the reach of any cross-disciplinary claims must be clearly bounded.
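The missingness audit suggested in the second point above could start from something as simple as a per-journal report of how many records lack a submission date. The records below are fabricated for illustration; in the real pipeline each record would carry the extracted `received` date or `None`.

```python
# Sketch: a minimal per-journal missingness report for submission dates.
# Records and journal names are fabricated for illustration.
from collections import defaultdict

records = [
    {"journal": "J Med A", "received": "2021-03-01"},
    {"journal": "J Med A", "received": None},
    {"journal": "J Med B", "received": None},
    {"journal": "J Med B", "received": None},
]

# journal -> [n_total, n_missing]
counts = defaultdict(lambda: [0, 0])
for rec in records:
    counts[rec["journal"]][0] += 1
    if rec["received"] is None:
        counts[rec["journal"]][1] += 1

for journal, (total, missing) in sorted(counts.items()):
    print(f"{journal}: {missing}/{total} records lack a submission date "
          f"({100 * missing / total:.0f}%)")
```

Cross-tabulating the same counts by discipline, impact tier, or access model would show directly whether the journals that report dates are systematically different, which is the bias the review warns about.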

Competing interests: None.
