Research Articles

  • Article

    Steffen Lemke, Isabella Peters

    Bronze OA is an under-researched facet of open access (OA), surrounded by many uncertainties regarding its causes and their individual significance. This study aims to reduce these uncertainties by providing an overview of the state of Bronze OA within Web of Science and by investigating relationships between research fields, publishers, and their rates of Bronze OA. We analyze 3,943,511 Bronze OA publications registered in Web of Science since 2000, applying statistical analyses and heat maps. Our results show high occurrences of Bronze OA in biological and medical fields, while the arts & humanities, engineering sciences, physics, and chemistry comprise the lower end of the spectrum. Among publishers, large university presses and the BMJ Group stand out as heavy users of Bronze OA. Continuations of this study will semi-manually investigate the article pages of Bronze OA publications to provide empirical evidence for the significance of the various reasons speculated to underlie the Bronze OA label.

    Curated | February 25, 2026
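The field-by-publisher analysis the study describes boils down to a rate matrix: the share of Bronze OA publications in each field × publisher cell, which is then rendered as a heat map. A minimal sketch of that computation follows; the records, field names, and publisher labels here are illustrative assumptions, not the authors' Web of Science dataset.

```python
import pandas as pd

# Hypothetical records, one row per publication — NOT the study's data.
records = pd.DataFrame({
    "field":     ["Medicine", "Medicine", "Physics", "Physics", "Biology", "Biology"],
    "publisher": ["BMJ",      "BMJ",      "OUP",     "OUP",     "OUP",     "BMJ"],
    "is_bronze": [True,       False,      False,     False,     True,      True],
})

# Mean of the Boolean flag per cell = share of Bronze OA publications,
# i.e. the matrix a heat map would colour.
rates = records.pivot_table(index="field", columns="publisher",
                            values="is_bronze", aggfunc="mean")
print(rates)
```

In practice one would pass a matrix like `rates` straight to a plotting library's heat-map routine; cells with no publications come out as `NaN` and are usually masked.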

  • Article

    Hélène Draux, Briony Fane, Daniel W. Hook, Juergen Wastl, Philip Lewis, Molly Morgan Jones, Pablo Roblero, James R. Wilsdon

    The UK has a long-established reputation for excellence in research across a broad range of fields, but in recent years there has been greater emphasis on STEM investment and greater recognition of the UK's success in STEM. This paper examines the relative strengths of SHAPE disciplines and demonstrates that the UK's SHAPE research portfolio outperforms the UK's STEM research on each international benchmark considered in this work. It is argued that SHAPE research is becoming increasingly important as a partner to STEM as the widespread use of technology creates societal challenges. It is also argued that the strength of UK SHAPE is the basis of a strategic advantage for UK research.

    Curated | February 24, 2026

  • Article

    Susana Oliveira Henriques, Narmin Rzayeva, Stephen Pinfield, Ludo Waltman

    Preprinting has gained considerable momentum, and in some fields it has turned into a well-established way to share new scientific findings. The possibility of organising quality control and peer review for preprints is also increasingly highlighted, leading to the development of preprint review services. We report a descriptive study of preprint review services with the aim of developing a systematic understanding of the main characteristics of these services, evaluating how they manage preprint review, and positioning them in the broader scholarly communication landscape. Our study shows that preprint review services have the potential to turn peer review into a more transparent and rewarding experience and to improve publishing and peer review workflows. We are witnessing the growth of a mixed system in which preprint servers, preprint review services, and journals operate mostly in complementary ways. In the longer term, however, preprint review services may disrupt the scholarly communication landscape in a more radical way.

    Curated | February 11, 2026

  • Article

    Aura Kivilaakso, Henna Moore, Johanna Kolhinen

    In 2025, the University of Helsinki is conducting a comprehensive research assessment (RAUH 2025) with a focus on interdisciplinarity. The purpose of RAUH 2025 is to produce an overall view of the quality and impact of research at the University of Helsinki, identify emerging and strong research areas, and support the renewal of research. Capturing interdisciplinarity in the assessment has been a challenge, as the university lacks data that illuminates the integration of disciplines in research. At the University of Helsinki, research assessment prioritizes qualitative data and emphasizes collaboration and learning in the assessment process. Therefore, a dedicated module on interdisciplinarity was designed for the RAUH 2025 assessment. Interdisciplinarity was explored in a separate university-level self-assessment exercise. This paper discusses the design and benefits of the interdisciplinarity module, the academic community's participation in the process, and the support provided by Research Services before and during the self-assessment period.

    Curated | February 11, 2026

  • Article

    Paweł Lenartowicz

    Publication bias poses a serious challenge to the clarity and precision of scientific research and meta-analyses. Persistent methodological obstacles hinder the estimation of this bias, especially in heterogeneous datasets, where studies vary widely in methodologies and effect sizes. To address this gap, I propose a Likelihood Ratio Test for Publication Bias, a statistical method designed to detect and quantify publication bias in datasets of heterogeneous study results. This new method demonstrates higher precision and reliability compared to existing methods, though challenges such as varying research practices and large sample size requirements remain.

    Curated | January 30, 2026
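The general shape of a likelihood ratio test for publication bias can be sketched as follows. This is not the author's proposed method, only a generic illustration using a standard one-parameter selection model (Hedges-style): under the null, every study result is published; under the alternative, non-significant results are published with some relative probability p < 1. All data here are simulated, and the model choice is an assumption for illustration.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Simulated z-scores: non-significant results are only 30% as likely
# to be published — a deliberately planted selection effect.
z_all = rng.normal(loc=0.3, scale=1.0, size=2000)
sig = np.abs(z_all) > 1.96
z = z_all[sig | (rng.random(z_all.size) < 0.3)]

def neg_loglik(params, z, crit=1.96):
    """One-parameter selection model: significant results are always
    published; non-significant ones with relative probability p."""
    mu, p = params
    w = np.where(np.abs(z) > crit, 1.0, p)
    # Normalising constant: expected publication weight under N(mu, 1).
    prob_sig = stats.norm.sf(crit - mu) + stats.norm.cdf(-crit - mu)
    norm = prob_sig + p * (1.0 - prob_sig)
    return -np.sum(np.log(w) + stats.norm.logpdf(z, loc=mu) - np.log(norm))

# Null model (p = 1, no selection): MLE for mu is the sample mean.
ll_null = -neg_loglik((z.mean(), 1.0), z)
# Alternative model: estimate mu and p jointly.
fit = optimize.minimize(neg_loglik, x0=(0.0, 0.5), args=(z,),
                        bounds=[(-3.0, 3.0), (1e-6, 1.0)])
ll_alt = -fit.fun

# Likelihood ratio statistic, compared against chi-squared with 1 df.
lr_stat = 2.0 * (ll_alt - ll_null)
p_value = stats.chi2.sf(lr_stat, df=1)
print(f"LR statistic = {lr_stat:.1f}, p = {p_value:.3g}")
```

With the planted selection effect, the test rejects the no-bias null; the fitted p also recovers, roughly, the publication probability used in the simulation. The paper's contribution lies in extending this idea to heterogeneous study results, which this homogeneous sketch does not attempt.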

  • Article

    This is a research plan to evaluate whether the CONICET-Argentina research data platform improves the integrity of research data produced by Research Support Staff (RSS).

    Curated | January 6, 2026

  • Article

    Konrad Hinsen

    Since its beginnings in the 1940s, automated reasoning by computers has become a tool of ever-growing importance in scientific research. So far, the rules underlying automated reasoning have mainly been formulated by humans, in the form of program source code. Rules derived from large amounts of data, via machine learning techniques, are a complementary approach currently under intense development. The question of why we should trust these systems, and the results obtained with their help, was discussed by early practitioners of computational science but later forgotten. The present work focuses on independent reviewing, an important source of trust in science, and identifies the characteristics of automated reasoning systems that affect their reviewability. It also discusses possible steps towards increasing reviewability and trustworthiness via a combination of technical and social measures.

    Curated | December 22, 2025

  • Article

    Lutz Bornmann, Christian Leibel

    Citation analysis is widely used in research evaluation to assess the impact of scientific papers. These analyses rest on the assumption that citation decisions by authors are accurate, representing a flow of knowledge from cited to citing papers. However, in practice, researchers often cite for reasons other than attributing intellectual credit to previous research. Citations made for rhetorical reasons, or without reading the cited work, compromise the value of citations as an instrument for research evaluation. Past research on threats to the accuracy of citations has mainly focused on citation bias as the primary concern. In this paper, we argue that citation noise - the undesirable variance in citation decisions - represents an equally critical but underexplored challenge in citation analysis. We define and differentiate two types of citation noise: citation level noise and citation pattern noise. Each type of noise is described in terms of how it arises and the specific ways it can undermine the validity of citation-based research assessments. By conceptually distinguishing citation noise from citation accuracy and citation bias, we propose a framework for the foundations of citation analysis. We discuss strategies and interventions to minimize citation noise, aiming to improve the reliability and validity of citation analysis in research evaluation. We recommend that current professional reform movements in research evaluation, such as the Coalition for Advancing Research Assessment (CoARA), pick up these strategies and interventions as an additional building block for the careful, responsible use of bibliometric indicators in research evaluation.

    Curated | December 10, 2025

  • Article

    Bence Orkeny

    The “replication crisis” has sparked extensive debate about core epistemic issues in contemporary research practices. Central to this debate is the epistemic function of replication: how does replication contribute to knowledge, validity, and the structure of scientific inquiry? This paper offers a structured overview and analysis of 15 influential contributions to this debate. I categorize these papers along three key domains of focus: the relation between a replication and the original study; the role of replication within specific scientific disciplines; and its significance for science as a whole. My analysis reveals two notable research gaps. First, few articles address all three domains in an integrated way. Second, the literature appears to follow a temporal trajectory - from early enthusiasm, through critical reflection, to a more balanced reengagement - which merits further attention. Overall, this essay offers an initial analysis of the literature concerned with the epistemic function of replication.

    Curated | November 19, 2025

  • Article

    Alex Hulkes, Cillian Brophy, Ben Steyn

    For a variety of reasons, including a need to save time and a desire to reduce biases in outcomes, some funders of research have started to use partial randomisation in their funding decision processes. The effect that randomisation interventions have on the reliability of those processes should, it is argued, be a consideration in their use, but this key aspect of their implementation remains under-appreciated. Using a simple specification of a research proposal peer review process, simulations are carried out to explore the ways in which decision reliability, bias, the extent of decision randomisation, and other factors interact. As might be expected from both logic and existing knowledge, randomisation has the potential to reduce bias, but it may also reduce decision reliability, as inferred from the F1 score and accuracy of a simulated binary (successful, rejected) decision outcome classification process. Bias is also found, in one sense and qualitatively, to be rather insensitive to partial randomisation as it is typically applied in real-world situations. The simple yet apparently effective specification of the simulation of reviewer scores implemented here may also provide insights into the distribution of merit across research funding proposals, and into how they are assessed.

    Curated | November 13, 2025
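The kind of simulation this abstract describes can be sketched in a few lines: draw a true merit for each proposal, add reviewer noise to produce scores, fund the top proposals either deterministically or with a lottery over a middle band, and score the decision as a binary classifier against the "truly" fundable set. The sizes, noise level, and lottery band below are illustrative assumptions, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: 1000 proposals, 100 funded; reviewer score =
# true merit plus Gaussian assessment noise.
n, n_funded = 1000, 100
merit = rng.normal(size=n)
score = merit + rng.normal(scale=0.5, size=n)

def fund(score, n_funded, n_random=0, rng=rng):
    """Fund the top (n_funded - n_random) proposals by score outright,
    then fill the remaining slots by lottery among the next band."""
    order = np.argsort(score)[::-1]                       # best score first
    sure = order[:n_funded - n_random]
    band = order[n_funded - n_random : n_funded + n_random]
    lottery = (rng.choice(band, size=n_random, replace=False)
               if n_random else np.array([], dtype=int))
    funded = np.zeros(n, dtype=bool)
    funded[sure] = True
    funded[lottery] = True
    return funded

def f1(funded, merit, n_funded):
    """F1 of the funding decision against the top-n_funded by true merit."""
    truly = np.zeros(len(merit), dtype=bool)
    truly[np.argsort(merit)[::-1][:n_funded]] = True
    tp = np.sum(funded & truly)
    precision, recall = tp / funded.sum(), tp / truly.sum()
    return 2 * precision * recall / (precision + recall)

print(f"no lottery    : F1 = {f1(fund(score, n_funded, 0), merit, n_funded):.3f}")
print(f"20-slot lottery: F1 = {f1(fund(score, n_funded, 20), merit, n_funded):.3f}")
```

Averaging F1 over many replications, noise levels, and lottery sizes gives the reliability-versus-randomisation trade-off the paper explores; adding a systematic score offset for a subgroup of proposals would let the same harness probe bias.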