Research Articles
Article
This is a research plan to evaluate whether the CONICET-Argentina research data platform improves the integrity of research data produced by Research Support Staff (RSS).
Curated
| January 6, 2026
Article
Konrad Hinsen
Since its beginnings in the 1940s, automated reasoning by computers has become a tool of ever-growing importance in scientific research. So far, the rules underlying automated reasoning have mainly been formulated by humans, in the form of program source code. Rules derived from large amounts of data, via machine learning techniques, are a complementary approach currently under intense development. The question of why we should trust these systems, and the results obtained with their help, was discussed by early practitioners of computational science, but was later forgotten. The present work focuses on independent reviewing, an important source of trust in science, and identifies the characteristics of automated reasoning systems that affect their reviewability. It also discusses possible steps towards increasing reviewability and trustworthiness via a combination of technical and social measures.
Curated
| December 22, 2025
Article
Lutz Bornmann, Christian Leibel
Citation analysis is widely used in research evaluation to assess the impact of scientific papers. These analyses rest on the assumption that citation decisions by authors are accurate, representing the flow of knowledge from cited to citing papers. However, in practice, researchers often cite for reasons other than attributing intellectual credit to previous research. Citations made for rhetorical reasons or without reading the cited work compromise the value of citations as an instrument for research evaluation. Past research on threats to the accuracy of citations has mainly focused on citation bias as the primary concern. In this paper, we argue that citation noise - the undesirable variance in citation decisions - represents an equally critical but underexplored challenge in citation analysis. We define and differentiate two types of citation noise: citation level noise and citation pattern noise. Each type of noise is described in terms of how it arises and the specific ways it can undermine the validity of citation-based research assessments. By conceptually distinguishing citation noise from citation accuracy and citation bias, we propose a framework for the foundations of citation analysis. We discuss strategies and interventions to minimize citation noise, aiming to improve the reliability and validity of citation analysis in research evaluation. We recommend that current professional reform movements in research evaluation, such as the Coalition for Advancing Research Assessment (CoARA), pick up these strategies and interventions as an additional building block for the careful, responsible use of bibliometric indicators in research evaluation.
Curated
| December 10, 2025
Article
Bence Orkeny
The “replication crisis” has sparked extensive debate about core epistemic issues in contemporary research practices. Central to this debate is the epistemic function of replication: how does replication contribute to knowledge, validity, and the structure of scientific inquiry? This paper offers a structured overview and analysis of 15 influential contributions to this debate. I categorize these papers along three key domains of focus: the relation between a replication and the original study; the role of replication within specific scientific disciplines; and its significance for science as a whole. My analysis reveals two notable research gaps. First, few articles address all three domains in an integrated way. Second, the literature appears to follow a temporal trajectory - from early enthusiasm, through critical reflection, to a more balanced reengagement - which merits further attention. Overall, this essay offers an initial analysis of the literature concerned with the epistemic function of replication.
Curated
| November 19, 2025
Article
Alex Hulkes, Cillian Brophy, Ben Steyn
For a variety of reasons, including a need to save time and a desire to reduce biases in outcomes, some funders of research have started to use partial randomisation in their funding decision processes. The effect that randomisation interventions have on the reliability of those processes should, it is argued, be a consideration in their use, but this key aspect of their implementation remains under-appreciated. Using a simple specification of a research proposal peer review process, simulations are carried out to explore the ways in which decision reliability, bias, the extent of decision randomisation and other factors interact. As might be expected on the basis of both logic and existing knowledge, randomisation has the potential to reduce bias, but it may also reduce decision reliability, as inferred from the F1 score and accuracy of a simulated binary (successful, rejected) decision outcome classification process. Bias is also found, in one sense and qualitatively, to be rather insensitive to partial randomisation as it is typically applied in real-world situations. The simple yet apparently effective specification of simulated reviewer scores implemented here may also provide insights into the distribution of merit across research funding proposals, and into how such proposals are assessed.
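As a rough illustration of how such a simulation might be set up, here is a minimal sketch; it is not the authors' model, and every parameter in it (the merit distribution, the reviewer noise level, the size and target of the bias penalty, the lottery pool) is an assumption chosen for the example. It funds the top-scoring proposals, awards a fraction of the budget by lottery among near-threshold candidates, and reports the F1 score and accuracy against "true" merit alongside a simple bias gap:

```python
import numpy as np

# Minimal sketch of a partially randomised funding decision.
# All parameters are illustrative assumptions, not values from the paper.
rng = np.random.default_rng(0)

n_proposals = 10_000
n_funded = 2_000  # funding capacity: the top 20% "deserve" funding

# Latent merit, and the ground-truth set of deserving proposals.
merit = rng.normal(0.0, 1.0, n_proposals)
deserving = merit >= np.sort(merit)[-n_funded]

# Observed review score = merit + reviewer noise + a bias penalty
# applied to an arbitrary disadvantaged subgroup.
disadvantaged = rng.random(n_proposals) < 0.3
score = merit + rng.normal(0.0, 0.8, n_proposals)
score[disadvantaged] -= 0.3

def fund(score, rand_fraction):
    """Fund the top scorers, then fill the remaining places by
    lottery among a 'grey zone' just below the score cut-off."""
    n_by_score = int(n_funded * (1 - rand_fraction))
    order = np.argsort(score)[::-1]           # best score first
    funded = np.zeros(n_proposals, dtype=bool)
    funded[order[:n_by_score]] = True
    n_lottery = n_funded - n_by_score
    if n_lottery:
        pool = order[n_by_score:n_by_score + 3 * n_lottery]
        funded[rng.choice(pool, n_lottery, replace=False)] = True
    return funded

def f1_score(pred, true):
    tp = np.sum(pred & true)
    precision, recall = tp / pred.sum(), tp / true.sum()
    return 2 * precision * recall / (precision + recall)

for frac in (0.0, 0.1, 0.3):
    funded = fund(score, frac)
    accuracy = np.mean(funded == deserving)
    gap = funded[~disadvantaged].mean() - funded[disadvantaged].mean()
    print(f"randomised={frac:.0%}  F1={f1_score(funded, deserving):.3f}  "
          f"accuracy={accuracy:.3f}  funding-rate gap={gap:.3f}")
```

Under these assumptions, raising the lottery fraction tends to narrow the funding-rate gap between the two groups while lowering F1 and accuracy, which is qualitatively the trade-off the paper explores.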
Curated
| November 13, 2025
Article
Federico Vasen, Nerina F. Sarthou, Silvina A. Romano
Current research assessment reform movements advocate for recognizing the full range of academic contributions, including outputs beyond traditional publications. This article examines how technological activities and products (TAPs) are evaluated within Argentina’s CONICET research career system. Drawing on 421 peer review reports in applied fields (veterinary science, civil engineering, and computer science), we analyze the place of TAPs in evaluation discourse and the challenges they pose. Combining lexicometric and qualitative analysis, this research shows that TAPs are rarely central to evaluations, often overshadowed by publications and other conventional outputs. Where TAPs are addressed, reviewers express difficulties due to insufficient documentation, inconsistencies across sources, and uncertainty about originality or relevance. These issues reflect both procedural shortcomings and a lack of shared criteria. Our findings show that the goal of valuing diverse contributions remains difficult to implement in practice: TAPs represent a category where consensus is weak, and expectations are unclear. Reform efforts must distinguish between technical limitations in evaluation design and deeper disagreements over what constitutes academic merit—especially in applied research contexts. We argue that the evaluation of TAPs constitutes a distinct object of study, analytically separate from social impact, and deserving of specific attention within broader debates on responsible research assessment.
Curated
| November 6, 2025
Article
Alexander Rushforth, Gunnar Sivertsen, James Wilsdon, [...], Lin Zhang
How are national systems for assessing publicly funded research evolving? What purposes do they serve and how are they designed to fulfil these? This working paper surveys the landscape of national research assessment and funding systems across thirteen countries from 2010 to 2024, and makes three contributions to our understanding of these systems. First, we advance a new typology to categorize and compare important characteristics of these systems, providing insights into their similarities and differences, and a basis for mutual learning. Second, we identify and compare important shifts over time across the thirteen systems through the framework of three dynamic and interacting research performance paradigms. These point to a gradual shift away from narrow conceptions of research ‘excellence’ towards more holistic criteria of value, qualities and impacts across several systems – though not all. Finally, we consider potential trajectories over the next decade: including how a variety of assessment systems might respond to and incorporate responsible research assessment (RRA) movements for reform. By mapping the landscape of research assessment systems across countries and identifying dynamics of change, this paper offers insights for policymakers, research funders and institutional leaders looking to navigate this terrain at a time of shifting expectations.
Curated
| November 5, 2025
Article
Achal Agrawal, Moumita Koley
The journal Nature recently announced that all of its reviews will henceforth be open. This is a good first step that will hopefully extend to other Springer Nature journals, as well as to other major publishers, and it will go a long way towards improving trust in science. Reviewers will still be able to choose to remain anonymous; blind peer review is one of the pillars of the publication system, and reviewers' choice must be respected.
However, there have been many reported cases in which reviewers abuse this anonymity to request citations to their own articles, sometimes ones not even related to the paper under review. Increasingly, many reviews are also written by AI, leading to low-quality feedback.
To mitigate these issues, reviews can be made pseudo-anonymous, i.e. traceable. Research integrity teams can then identify bad actors, and traceability will also act as a deterrent against shoddy, lazy or unfair reviews.
Additionally, it will provide useful analytics for both editors and readers, helping them make more informed decisions.
Curated
| October 22, 2025
Article
Serhii Nazarovets
This study examines the effect of article processing charge (APC) waivers on the participation of Ukrainian researchers in fully Gold Open Access (Gold OA) journals published by the five largest academic publishers - Elsevier, SAGE, Springer Nature, Taylor & Francis, and Wiley - during the period 2019-2024. These publishers were selected because, in response to the full-scale war launched against Ukraine in 2022, all five introduced emergency 100% APC-waiver policies for Ukrainian authors. Using bibliometric data from the Web of Science Core Collection, the study analyses publication trends in Ukrainian-authored articles in fully Gold OA journals of these publishers before and after 2022. The results show a marked post-2022 increase in Ukraine's Gold OA output, particularly in journals published by Springer Nature and Elsevier. Disciplinary and publisher-specific patterns are evident, with especially strong growth in the medical and applied sciences. The findings underscore the potential of targeted support measures during times of crisis, while also illustrating the inherent limitations of APC-based publishing models in fostering equitable scholarly communication.
Curated
| October 13, 2025
Article
Alexander Schniedermann
Modern biomedical research is becoming increasingly industrialized, with standardized practices playing a central role in its evolution. Since the 1990s, reporting guidelines have introduced structured templates for scientific writing, aimed at ensuring quality and consistency in publications. These guidelines, such as PRISMA, were rapidly adopted across diverse biomedical domains and are now widely used by researchers around the world. However, their impact extends far beyond the formal standardization of writing. Because they are tailored to specific types or genres of research, reporting guidelines exert a deeper influence on scientific practice. Interviews with authors of systematic reviews who used PRISMA reveal a varied and complex landscape of usage. Rather than applying the guideline solely for reporting purposes, researchers integrate it into multiple stages of their workflow, including study design, data analysis, and peer review. In doing so, they transform PRISMA into a practical and versatile tool that supports the entire research process. Yet, its use is not uniform. Instead, researchers assign different roles to the guideline depending on individual preferences and contextual factors which, together, are not yet fully systematized or understood. This raises important questions about what exactly is being standardized through reporting guidelines. Moreover, PRISMA’s social implications and the range of tasks it influences suggest that it shapes systematic reviewing as a new profession or a distinct set of tasks within the broader biomedical field.
Curated
| October 9, 2025