Research Articles
Article
Adrian Barnett
Background: The pervasive mantra to “publish or perish” means that some researchers prioritise quantity over quality. In the rush to produce papers, researchers may neglect to thoroughly check their writing. Spelling errors are a sign of rushed practice, and hence a potential indicator of poor research quality. Objective: To examine spelling errors in published abstracts and estimate the trend over time and potential predictors of errors.
Methods: We conducted an observational study of the health and medical literature available in PubMed and OpenAlex between 2008 and 2024. We searched titles and abstracts for a list of more than 4,000 common spelling errors compiled by Wikipedia. As a comparison group, we randomly selected control abstracts that contained no errors. We used regression to examine predictors of spelling errors and whether papers with more spelling errors received fewer citations.
Results: We detected 48,420 spelling errors in more than 21 million abstracts. The trend in error rates increased slightly until 2016, after which it decreased. Papers with more authors had fewer errors on average. There were large differences in spelling error rates according to the first author’s country and the publisher. Abstracts with two spelling errors had an estimated 8% reduction in citations compared to abstracts without errors.
Conclusions: Spelling errors did not increase consistently during the study period, providing no evidence of a general worsening in research quality. The recent reduction in spelling error rates could be due to an increased use of large language models in paper writing. Abstracts with spelling errors had fewer citations, indicating a link between spelling errors and overall research quality.
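The error-detection step described in the Methods amounts to matching abstracts against a dictionary of known misspellings. The sketch below is illustrative only, not the authors' code: the three-entry dictionary stands in for Wikipedia's full list of more than 4,000 misspellings, and the function name is hypothetical.

```python
import re

# Tiny stand-in for Wikipedia's list of common misspellings
# (the study used the full list); maps error -> correction.
MISSPELLINGS = {
    "recieve": "receive",
    "seperate": "separate",
    "occured": "occurred",
}

# One alternation with word boundaries, so correctly spelled words
# (e.g. "received") are not flagged by a partial overlap.
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, MISSPELLINGS)) + r")\b",
    flags=re.IGNORECASE,
)

def find_errors(abstract: str) -> list[str]:
    """Return the known misspellings found in one abstract."""
    return [m.group(0).lower() for m in PATTERN.finditer(abstract)]

print(find_errors("We recieve data and analyse seperate cohorts."))
# -> ['recieve', 'seperate']
```

Counting matches per abstract, rather than just flagging them, would support the paper's regression of citation counts on the number of errors.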
Curated
| July 29, 2025
Article
Dmitry Kochetkov
Peer review has long been regarded as a cornerstone of scholarly communication, ensuring the quality and credibility of published research. However, its institutionalization in the mid-20th century contrasts sharply with the 300-year history of academic journals. By the early 21st century, a view had emerged that the conventional model of peer review faces systemic challenges, including inefficiency, bias, and institutional inertia. The study aims to synthesize the evolution, practices, and outcomes of both conventional and innovative peer review models in scholarly publishing. Through a mixed-methods approach combining interpretative literature review and process modeling (BPMN), it identifies four frameworks: pre-publication peer review, registered reports, modular publishing, and the Publish-Review-Curate (PRC) model. While the PRC model, which integrates preprints with post-publication review, demonstrates advantages in transparency and accessibility, no single approach emerges as universally ideal. The choice of model depends on disciplinary context, resource availability, and institutional priorities. The analysis underscores the need for adaptable platforms that enable hybrid workflows, balancing rigor with inclusivity. Future research must address empirical gaps in evaluating these innovations, particularly their long-term impact on equity and epistemic norms.
Curated
| July 22, 2025
Article
Juan Pablo Bascur, Rodrigo Costas, Suzan Verberne
Traditional science maps visualize topics by clustering documents, but they are inherently biased toward clustering certain topics over others. If these topics could be chosen, then science maps could be tailored for different needs. In this paper, we explore the use of document networks from diverse data sources as a tool to control the topic clustering bias of a science map. We analyze this by evaluating the clustering effectiveness of several topic categories over two traditional and six non-traditional data sources. We found that the topics favored in each non-traditional data source are: health for Facebook users, biotechnology for patent families, government and social issues for policy documents, food for Twitter conversations, nursing for Twitter users, and geographical entities for document authors (the favoring in this last source was particularly strong). Our results show that diverse data sources can be used to control topic bias, which opens up the possibility of creating science maps tailored for different needs.
Curated
| July 16, 2025
Article
Nicki Lisa Cole, Sven Ulpts, Agata Bochynska, Eva Kormann, Matthew Good, Barbara Leitner, Tony Ross-Hellauer
The integrative review presented here examines how reproducibility and replicability are conceptualized and discussed in relation to qualitative research, and which factors and practices enable or undermine them. Both peer-reviewed and grey English-language literature that address reproducibility and/or Open Science in relation to qualitative research were eligible for inclusion. Initial searches were conducted in Scopus, Web of Science, Dimensions, PubMed, APA PsycInfo, and JSTOR, followed by snowball sampling from included literature. Studies were screened and both quantitative and qualitative data were extracted using the SyRF online platform, with 248 papers included. We found that conceptualizations that stem from quantitative standpoints are overwhelmingly framed as inappropriate practices and epistemic criteria for (most) qualitative research. When conceptualized in alternative ways that are adapted to the epistemic conditions, aims and practices of qualitative research, they can be both applicable and appropriate. Key barriers include the ontological and epistemological misalignment of reproducibility, replicability and Open Science with qualitative research, and ethical and practical concerns surrounding data sharing and reuse. Key enablers include practices that respond to ethical and practical concerns around data sharing and reuse (anonymization, ethical consent practices, context documentation, and ethical access management), adapting expectations and norms of openness, and established qualitative practices including documentation, reflexivity, and considering positionality. We conclude that reproducibility, replicability and Open Science practices must be adapted to the aims and epistemic conditions of qualitative research for them to be applicable and feasible, and that they will not always be both for all qualitative research.
Curated
| July 7, 2025
Article
Isabel Abedrapo Rosen, Ricardo Hartley Belmar
While Open Science emphasises openness and reproducibility, governance documentation does not necessarily incorporate these features. This raises concerns, especially when compared with government policy mandates emphasising transparency and accountability. Persistent identifiers (PIDs) play a crucial role in enabling the discoverability, accessibility, and traceability of scholarly outputs. However, while PIDs enjoy widespread adoption among individual practitioners, uptake within institutional and regulatory bodies has been slower. This discrepancy leads to uneven metadata usage and highlights the need for a more unified approach to PIDs across the scholarly ecosystem. This essay analyses 46 Open Science governance documents to pinpoint essential areas for improvement. The inconsistencies across documents, the absence of digital object identifiers (DOIs), and varied recognition by bibliographic managers underscore the urgent need for standardisation. Embracing Open Science offers a promising avenue to unify stakeholders in a collective push towards bolstering the integrity and efficiency of research, thereby ensuring more robust governance.
Curated
| June 26, 2025
Article
Erin C. McKiernan, Caitlin Carter, Michael R. Dougherty, Greg Tananbaum
Recent years have seen a growing dissatisfaction with how academics and their scholarly work are evaluated, and a corresponding global proliferation of initiatives dedicated to assessment reform. A common theme across many of these initiatives is a call to center values, focusing on how incentives could be designed to better reward aspects like collaboration, equity, rigor, and transparency. While such values-based approaches have laid solid groundwork for academic institutions to think through and prioritize their values, we see a need for granular tools that can help institutions transform their values into actionable reforms. To that end, we present a framework, developed in part through workshops we ran at the 2023 Council of Graduate Departments of Psychology (COGDOP) Annual Meeting, the Association for Psychological Science (APS) Annual Convention, and the American Anthropological Association (AAA) Department Leaders Summer Institute. The framework includes 14 values (e.g. creativity, inclusivity, engagement, public good), and for each value, we outline some scoping considerations, representative academic activities or scholarly outputs, and possible behavioral indicators that could be incorporated into promotion and tenure evaluations. This framework is not exhaustive, and will likely vary depending on disciplinary or other contexts, but we hope it will serve as a starting point and encourage institutions to tackle assessment reforms with a values-based lens.
Curated
| June 20, 2025
Article
Natalia Gonzalez Bohorquez, Sucharitha Weerasuriya, David Brain, Sameera Senanayake, Sanjeewa Kularatna, Adrian Barnett
The research community’s fixation on journal prestige is harming research quality, as some researchers focus on where to publish instead of what they publish. We examined researchers’ publication preferences using a discrete choice experiment in a cross-sectional survey of international health and medical researchers. We asked researchers to consider two hypothetical journals and decide which they would prefer. The hypothetical journals varied in their impact factor, formatting requirements, speed of peer review, helpfulness of peer review, editor’s request to cut results, and whether the paper would be useful for their next promotion. These attributes were designed using focus groups and interviews with researchers, with the aim of creating a tension between personal and societal benefit. Our survey found that researchers’ strongest preference was for the highest impact factor, and the second strongest for a moderate impact factor. The least important attribute was a preference for making changes in format and wording rather than cutting results. Some respondents were willing to cut results in exchange for a higher impact factor. Despite international efforts to reduce the importance of impact factor, it remains a driver of researchers’ behaviour. The most prestigious journals may have the most partial evidence, as researchers are willing to trade their results for prestige.
Curated
| June 3, 2025
Article
Hans de Jonge, Bianca Kramer, Jeroen Sondervan
Transformative agreements have become an important strategy in the transition to open access, with almost 1,200 such agreements registered by 2025. Despite their prevalence, these agreements suffer from important transparency limitations, most notably article-level metadata indicating which articles are covered by these agreements. Typically, this data is available to libraries but not openly shared, making it difficult to study the impact of these agreements. In this paper, we present a novel, open, replicable method for analyzing transformative agreements using open metadata, specifically the Journal Checker tool provided by cOAlition S and OpenAlex. To demonstrate its potential, we apply our approach to a subset of publications funded by the Dutch Research Council (NWO) and its health research counterpart ZonMw. In addition, the results of this open method are compared with the actual publisher data reported to the Dutch university library consortium UKB. This validation shows that this open method accurately identified 89% of the publications covered by transformative agreements, while the 11% false positives shed an interesting light on the limitations of this method. In the absence of hard, openly available article-level data on transformative agreements, we provide researchers and institutions with a powerful tool to critically track and evaluate the impact of these agreements.
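The open method described above rests on querying open metadata sources such as OpenAlex for a funder's publications. The sketch below only constructs a works-API request URL; the filter syntax (comma = logical AND) follows the OpenAlex documentation, while the funder ID and function name are placeholders, not NWO's actual OpenAlex identifier or the authors' code.

```python
# Sketch: build an OpenAlex works query for publications by a given
# funder in a given year. FUNDER_NWO is a placeholder ID; look up the
# real funder identifier via https://api.openalex.org/funders.
BASE = "https://api.openalex.org/works"
FUNDER_NWO = "F0000000000"  # placeholder, not a verified ID

def works_query(funder_id: str, year: int, per_page: int = 200) -> str:
    """Return the URL for one page of a funder's works."""
    filters = f"grants.funder:{funder_id},publication_year:{year}"
    return f"{BASE}?filter={filters}&per-page={per_page}"

print(works_query(FUNDER_NWO, 2023))
```

Each returned work's DOI could then be checked against the cOAlition S Journal Checker tool to infer transformative-agreement coverage, which is the pairing of data sources the paper describes.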
Curated
| May 14, 2025
Article
John Chodacki, Mark Hahnel, Stefano Iacus, Ryan Scherle, Eric Olson, Nici Pfeiffer, Kristi Holmes, Mohammad Hosseini
The Generalist Repository Ecosystem Initiative (GREI), funded by the NIH, developed an AI taxonomy tailored to data repository roles to guide AI integration across repository management. It categorizes the roles into stages, including acquisition, validation, organization, enhancement, analysis, sharing, and user support, providing a structured framework for implementing AI in repository workflows.
Curated
| April 4, 2025
Article
Jason M. Chin, Stephanie Clayton, Stephen Cordner, Gary Edmond, Bethany Growns, Kylie Hunter, Bernard I’Ons, Kristy A. Martire, Gianni Ribeiro, Stephanie Summersby
This protocol aims to address two metaresearch questions: what do we know about the science underlying impactful legal decisions, and how can we assess this evidence efficiently and accurately, such that it is usable for courts? To do so, this protocol describes our plan for a systematic review of the literature underlying whether there are reliable indicators of the manner of death in stab wound cases. Evidence of this nature was recently found admissible by the High Court of Australia in Lang v The Queen. However, despite the High Court uncritically accepting this evidence, there has never been a systematic review that critically appraises the relevant literature. Beyond the review itself, we will fully document our process and all of the time-saving tools we rely on, to provide knowledge about whether systematic reviews of this nature can be efficiently deployed in legal proceedings.
Curated
| March 20, 2025