Introduction
The CRediT (Contributor Roles Taxonomy) standard was created to communicate the contributions of individuals to scientific projects. Initially developed by a consortium of researchers, publishers, and funders, it became an official National Information Standards Organization (NISO) standard in 2022 (NISO CRediT Working Group, 2022). CRediT is a high-level taxonomy that consists of 14 roles: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing—original draft, and Writing—review & editing. Today, CRediT information is solicited from authors by thousands of scientific journals when they submit a manuscript. The standard includes machine-readable metadata, and for journals that publish this metadata, scholarly databases can more easily digest the information, increasing the potential for its use by funders, hiring committees, meta-scientists, and others.
Because of its widespread use, it is possible that CRediT has advanced contributorship, the provision of information about who did what, more than any other scientific publishing initiative. Making information about researchers’ contributions explicit can reduce the tendency for readers to attribute the lion’s share of the work to the most senior or famous contributors to a project (Allen et al., 2019; Kovacs et al., 2021). An additional advantage of contributorship is that it improves accountability for individual aspects of the research: if questions are raised about some aspect of a project (e.g., data collection or an analysis), it is clearer which authors are responsible for it. Given the recent rise in scientific fraud allegations, this is increasingly important.
Here, our primary aim is to describe our project to translate CRediT from English into other languages. A secondary aim is to broaden awareness among non-English communities regarding options for acknowledging contributors (not just CRediT). We do this to advance our overarching goal of improving the attribution of research work to the people who do it.
History and Implementation of CRediT in English
In the 1990s, Drummond Rennie and others pointed out that the rise in the number of authors per scientific paper diluted the information provided by conventional author lists, and that papers should provide explicit information about who did what in a project (Rennie, 1997). Many journals subsequently began requiring that their authors provide some information about the roles of each co-author, but different journals used different lists of roles or left the information entirely free-form (Atkins, 2016).
After a consortium of biomedical researchers, funders, and publishers formed to create the CRediT standard, several major scholarly publishers implemented it as part of their submission and publication process. Typically, authors are presented with checkboxes allowing them to indicate which of the 14 CRediT categories each co-author contributed to. For example, Aries Systems Editorial Manager provides this for its customizable workflows used by Elsevier, Springer-Nature, PLOS, Wolters Kluwer, Wiley, Cell Press, Taylor & Francis, and Cambridge University Press (Aries, 2025). When this integration is enabled, the system collects information on types and levels of contribution for each author of the associated manuscript, and can publish it as manuscript meta-data. An additional option suggested by CRediT is that each contributor to a category can be marked as the “lead,” “equal,” or “supporting” contributor to that CRediT category. Most publishers using CRediT have not implemented this feature, but some have, such as the American Chemical Society (McGonagle-O’Connell, 2022).
In a survey sent to authors of scientific papers in 2013, 85% of respondents reported that the CRediT taxonomy was easy to use and that it covered all the roles of contributors to their paper (Allen et al., 2014). In a different survey, of Elsevier authors, 71% agreed that it was a positive step toward improved transparency and recognition around published work (Genova, 2023). It should be noted, however, that CRediT was designed primarily for biomedical research and may not fit other fields well.
Typically, CRediT statements are provided only for the authors of a journal article. As we will return to in the Discussion, this may hinder equitable recognition of contributions, as a person who does not satisfy commonly used authorship criteria may nevertheless deserve mention as a contributor in a CRediT statement (Holcombe, 2019). For example, a student researcher heavily involved in study design and data collection, but unable or unwilling to contribute to writing the manuscript, will not satisfy the International Committee of Medical Journal Editors’ authorship criteria, but has nevertheless made a noteworthy contribution (International Committee of Medical Journal Editors, 2013). Similarly, library and information professionals who contribute important information for systematic reviews are often not mentioned, or are relegated to Acknowledgments sections (Brunskill & Hanneke, 2022).
The implementation of CRediT by large publishers typically entails two aspects. First, the manuscript submission site of their journals allows (or requires) authors to indicate which of the 14 CRediT roles each co-author contributed to. Second, upon publication, the associated CRediT information is provided not only as text somewhere in the article, but also as metadata in the JATS-XML version. JATS (Journal Article Tag Suite) provides the structure for metadata that can be embedded in the webpage of an article (Beck, 2011).
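To make this concrete, the sketch below shows (in Python, using only the standard library) roughly what CRediT metadata can look like inside a JATS-XML contributor element. The attribute names follow our reading of the JATS4R CRediT recommendation, and the URL pattern for role identifiers is illustrative; consult the current recommendation before relying on either.

```python
import xml.etree.ElementTree as ET

# Sketch: a JATS <contrib> element carrying CRediT roles as <role>
# children. Attribute names reflect the JATS4R CRediT recommendation
# as we understand it; the slug rule below is illustrative only.
def contrib_with_roles(surname, given_names, roles):
    contrib = ET.Element("contrib", {"contrib-type": "author"})
    name = ET.SubElement(contrib, "name")
    ET.SubElement(name, "surname").text = surname
    ET.SubElement(name, "given-names").text = given_names
    for term in roles:
        slug = term.lower().replace(" ", "-")  # simplistic slug rule
        role = ET.SubElement(contrib, "role", {
            "vocab": "credit",
            "vocab-identifier": "https://credit.niso.org/",
            "vocab-term": term,
            "vocab-term-identifier":
                f"https://credit.niso.org/contributor-roles/{slug}/",
        })
        role.text = term
    return contrib

xml = ET.tostring(
    contrib_with_roles("Doe", "Jane", ["Conceptualization", "Software"]),
    encoding="unicode")
```

Metadata of this shape, embedded once per co-author, is what allows downstream services to recover who did what without parsing prose.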
When a publisher provides CRediT metadata (through JATS-XML) in their articles, the CRediT metadata can be fed into scholarly databases. The most popular service for communicating information to scholarly databases, CrossRef, is working on implementing CRediT (CRediT Taxonomy/Author Contribution – Content Registration, 2024). This will be a major advance in sharing and leveraging CRediT data, as the databases that use CrossRef are used extensively by research administrators and funders. For example, funders will be able to better understand the varied contributions made to multi-author projects. Realizing this benefit, however, will require scholarly databases to be modified to digest the CRediT metadata.
Many journals use publication systems that currently lack the capability to create JATS-XML, and thus do not provide CRediT metadata. Many also do not use submission systems that solicit the CRediT information from authors. In such cases, however, the journal may still request that authors enter CRediT information, simply in a free-text format that is then published in a section such as the “Acknowledgments” or “Author Note” (e.g., Meta-psychology editors, n.d.). Large language models and other natural language processing systems may be capable of parsing this rather regular type of text, which scholarly databases could perhaps employ.
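As a minimal illustration of why such statements are relatively machine-friendly, the following rule-based sketch (no language model required) parses statements of the common “Role: initials” form. The function name and splitting rules are our own; real statements vary in dash style, spelling, and punctuation, which is where NLP systems would help.

```python
import re

# A minimal, rule-based sketch of parsing a free-text CRediT statement
# of the form "Role: A.B., C.D.; Role: ...". Real statements vary
# (e.g., em-dash vs hyphen in the Writing roles), so a production
# parser would need normalization that this sketch omits.
CREDIT_ROLES = {
    "Conceptualization", "Data curation", "Formal analysis",
    "Funding acquisition", "Investigation", "Methodology",
    "Project administration", "Resources", "Software", "Supervision",
    "Validation", "Visualization", "Writing - original draft",
    "Writing - review & editing",
}

def parse_credit_statement(text):
    """Map each recognized CRediT role to its contributor tokens."""
    parsed = {}
    # Split on semicolons only: initials contain periods, so splitting
    # on "." would break contributor tokens apart.
    for chunk in re.split(r";\s*", text):
        role, _, people = chunk.partition(":")
        role = role.strip()
        if role in CREDIT_ROLES and people.strip():
            parsed[role] = [p.strip()
                            for p in re.split(r",| and ", people)
                            if p.strip()]
    return parsed
```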
CRediT is officially available only in English. This limits its use in non-English science, whose importance we argue for below.
Non-English Science
English is the dominant language for formal communication among scientists (Popova & Beavitt, 2017), but thousands of journals published in other languages also play a significant role within their communities. An analysis of 25,671 journals published using the open-source Open Journal Systems (OJS) software found 60 different publication languages (Khanna et al., 2022). Scientific journals based in communities with fewer resources are particularly likely to use Open Journal Systems, as it is free to install and use.
A reason that non-English scientific discourse is important is that science needs to communicate with local communities and stakeholders (Lujano & OASPA, 2023). The existence of non-English journals can be especially crucial for researchers based in the Global South, because their scholarship is often most relevant for regional communities. In addition, such researchers may have trouble publishing in English-language journals due to research culture, economic barriers, or language difficulties (Chowdhury et al., 2022; Turba et al., 2025; Valenzuela-Toro & Viglino, 2021).
For the Open Journal Systems journals studied by Khanna et al. (2022), only 50% of recently published articles were in English, followed by Indonesian (23.0%), Spanish (11.4%), and Portuguese (9.8%). This, together with other evidence, suggests that there are tens of thousands of scholarly peer-reviewed articles each year that are published in languages other than English (Liu, 2017).
Although many non-English articles are in disciplines such as the humanities, where the CRediT roles may not fit well, Khanna et al. (2022) found that 15% of the journals publishing in languages other than English predominantly published in biomedical and health sciences (the discipline from which CRediT originated), and 12% predominantly published engineering and technological studies. Another study found that non-English scholarship provides crucial information for the ecology of species in some places, including in high-biodiversity countries (Amano et al., 2021).
Motivation for Standardized Translations
In the current age of machine translation, the need to manually translate standards from English is not obvious. A service like Google Translate or DeepL Translator can provide a passable translation. Such translations, however, can lose nuance or mistranslate meaning tied to the scientific context and intention of the CRediT standard. Accuracy of translation is thus one of our purposes in involving humans in creating the translations.
A second motivation is to have a standard translation be publicly available in each language. In the same way that there is only one English version of CRediT, a single version in other languages can prevent confusion and facilitate agreement on what each CRediT role means and how it should be used. Having a standardized translation in a language can also increase confidence around the use of CRediT among publishers, editors, and researchers of the associated community. Finally, the existence of such a translation may trigger discussion in the corresponding language community about the possibility of adopting CRediT, or an alternative way of indicating what each contributor to a research output did.
CRediT is finding new uses, and our translations should facilitate those uses in non-English language communities. For example, researchers have begun generating CVs that aggregate the types of contributions a researcher makes across their research outputs (Schönbrodt et al., 2025). Another use of CRediT information is in metascience, where CRediT has been used to examine patterns of scientific work, e.g. for different author positions and different genders (Larivière et al., 2021). Assessing such patterns in different languages will be facilitated by having standard and high-quality translations.

Figure 1. The logo for the CRediT translation project (others are welcome to use it without attribution, CC-ZERO, although we appreciate citations).
History and Organization of the Translation Project
The translation project emerged from a broader community called the “Contributorship Collaboration” (https://contributorshipcollaboration.github.io) that was created to advance projects associated with contributorship.
Our translation project launched in October 2023, when some of us organized a hackathon at the online Big Team Science conference entitled “Translating CRediT for increased accessibility and adoption” (Holcombe et al., 2023). During the session, some in attendance started working on a few translations for languages they were fluent in, and some volunteered to help recruit speakers of additional languages. Announcements on social media and hackathon member contacts attracted additional participants. A website was created with details about the project alongside a code of conduct to promote inclusion and participation (https://contributorshipcollaboration.github.io/projects/translation/).
In 2024, the leaders of the present translation project (Holcombe, Kovacs, and Lagisz) scheduled video calls approximately monthly where the project and procedures were explained to newcomers, and participants worked on translations, the associated website, and outreach. A “translations in progress” table was maintained together with a list of native speakers who expressed interest through a Google form, social media, or email. The project webpage (https://contributorshipcollaboration.github.io/projects/translation/) hosts contribution instructions and other project materials.
Our completed translations of CRediT are posted at https://contributorshipcollaboration.github.io/projects/translation/completed/. The project team would like to continue to facilitate translation of any language with an interested scientific community, and potential new contributors are invited to express interest at https://forms.gle/UVa6bahYbF6SKTDM9 or by emailing one of the project leads.
Our Translation Process
The CRediT standard consists of the names of its fourteen contribution categories and their descriptions, typically a few dozen words each. An additional part of CRediT, marked optional in the standard, is the ability to indicate the degree of contribution of each project contributor, from the set ‘lead,’ ‘equal,’ and ‘supporting.’ We initially did not plan to translate this aspect, so as yet we have few translations of it, with more coming.
A diagram representing key steps of our translation process is shown in Figure 2. The process involved at least two fluent speakers of each non-English language. Another requirement was the involvement of someone with experience in academic research, ideally in science, so that they had some context for understanding the background to CRediT. After an initial translation was created, its quality was checked by a second person proficient in the language. This typically led to refinements. In many cases, the second person first translated the draft back into English, and this English “back translation” was then compared to the original English. Where discrepancies suggested a possible problem with the translation, they were scrutinized by one or both translators, and sometimes the translation was revised.

Figure 2. A schematic of our workflow for translating CRediT. The green font highlights the manual translation and validation steps.
As an example, during the translation process for the Dutch version of the CRediT roles, a researcher familiar with CRediT and fluent in Dutch (native speaker) translated the fourteen CRediT roles and their associated descriptions from English into Dutch. Next, a second translator — in this case, not a researcher and unfamiliar with CRediT but fluent in Dutch and English — conducted a back-translation of the Dutch version into English. Discrepancies between the back-translation and the original English text were discussed by both translators to refine the final translation.
After each translation was finalized, its text was entered into a structured JSON file. Once this file was placed in our GitHub repository, Python code (written largely by Marton Kovacs) generated the human-readable webpage (Figure 4). This webpage includes the translation content as well as additional information, such as the procedure used for the translation, the human translators, and the license.
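A simplified sketch of that JSON-to-webpage step is shown below. The field names (“language”, “roles”, “translators”) and the example content are illustrative, not the project’s actual schema, which lives in the GitHub repository along with the real generation code.

```python
import json
from html import escape

# Illustrative translation file. Field names and the description text
# are placeholders; see the project's repository for the real schema.
translation_json = """{
  "language": "Nederlands",
  "translators": ["Eli Thor\\u00e9", "Ruben Thor\\u00e9"],
  "roles": [{"name": "Conceptualisatie",
             "description": "Ideas; formulation of overarching research goals and aims."}]
}"""

def render_page(data):
    """Turn a parsed translation file into a minimal HTML fragment."""
    rows = "\n".join(
        f"<dt>{escape(r['name'])}</dt><dd>{escape(r['description'])}</dd>"
        for r in data["roles"])
    credits = escape(", ".join(data["translators"]))
    return (f"<h1>CRediT - {escape(data['language'])}</h1>"
            f"<dl>{rows}</dl>"
            f"<p>Translators: {credits}</p>")

html = render_page(json.loads(translation_json))
```

Keeping the JSON as the single source of truth means the same file can drive both the human-readable page and any machine consumers.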

Figure 3. Visualization of the global coverage of our translations. The lighter green shows countries with an official language for which we have a translation. The darker green shows countries where English is an official language (and which are thus covered to some extent) but which have an additional official language that we do not yet have a translation for. Image created in R with the rnaturalearth (Massicotte & South, 2023) and sf (Pebesma & Bivand, 2023) libraries.
Figure 3 illustrates the global coverage of our translations. While the coverage is in some ways impressive, it is important to note that many countries use multiple languages, many of which we do not have translations for (the status of each cannot easily be depicted by this type of map).
Table 1. The 36 languages and variants (a few are the same language written in different scripts) for which we currently have completed translations. For each language, the translation of “Conceptualization” is provided in the second column as an example. The third column lists translators for each language, including people who helped to validate initial translations. An updated list of completed and publicly shared translations is maintained at https://contributorshipcollaboration.github.io/projects/translation/completed/
| Language | Translation of “Conceptualization” | Translators |
|---|---|---|
| Amharic | ጽንሰ-ሀሳብ | Befkadu Mewded, Mengesha Asefa |
| Arabic | تصور | Heba Abd El Aziz Moussa Abd Alla, Omayma Missawi |
| Bengali | মৌলিক চিন্তার কাঠামো তৈরি | Arobindu Dash, Garga Chatterjee |
| Catalan | Conceptualització | Jordi Lacruz Casado, Marc Roger Bria Ramirez |
| Chinese (Simplified) | 概念化 | Wawa (Keren) Yu, Yefeng Yang |
| Chinese (Traditional) | 概念化 | Wawa (Keren) Yu, Yefeng Yang |
| Croatian | Konceptualizacija | Antica Čulina, Anita Tarandek |
| Czech | Konceptualizace | Nina Trubanová, Radana Chytilová |
| Danish | Konceptualisering | Rasmus Pedersen, Rasmus Overmark |
| Dutch | Conceptualisatie | Eli Thoré, Ruben Thoré |
| Farsi | مفهوم سازی | Saeed Shafiei Sabet, Marziyeh Amini Fard |
| Finnish | Tutkimuksen käsitteellinen muotoilu | Antti-Jussi Nygård, Jussi Lehtonen, Heikki Lehtonen |
| French | Conceptualisation | David Vaidis, Amélie Gourdon-Kanhukamwe |
| German | Konzeptualisierung | Timo Lüke, Caro Jansky, Helena Hartmann |
| Greek | Εννοιολόγηση | Elina Takola, Stavroula Litsiou, Georgia Daraki |
| Hindi | अवधारणा | Manisha Sinha, Swastika Issar |
| Hungarian | Konceptualizáció / Ötletelés | Marton Kovacs, Marton A. Varga |
| Indonesian | Konseptualisasi | Dasapta Erwin Irawan, Stevanus Nalendra Jati, Arif Gunawan |
| Italian | Ideazione | Lorenzo Ricolfi, Vittoria Porta |
| Japanese | 研究の立案 | Ayumi Mizuno, Sawaka Oka |
| Korean | 개념화 | Jin-Won Lee, Hye-Kyoung Moon, Dongjin Kim |
| Lithuanian | Konceptualizacija | Ineta Kačergytė, Barbora Drąsutytė-Vaičiukynė |
| Malayalam | ആശയവൽക്കരണം | Aswathi Surendran, Varada Sadanandan Potty |
| Norwegian – Bokmål | Idéutforming | Bjørn Sætrevik, Ulvhild Helena Tormodsdatter Færøvik |
| Norwegian – Nynorsk | Idéutforming | Bjørn Sætrevik, Ulvhild Helena Tormodsdatter Færøvik |
| Polish | Konceptualizacja | Malgorzata Lagisz, Marta Kowal, Jakub Krasucki |
| Portuguese | Conceitualização | Pietro Pollo, Erika Marques Santana |
| Romanian | Conceptualizare | Octavia-Luciana Madge, Elena Popescu, Grace Madge |
| Russian | Разработка концепции | Dmitry Kochetkov, Natalia Popova, Irina Kochetkova |
| Serbian (Cyrillic) | Концептуализација | Milica Pavlović, Milica Sevkušić, Anja Bošnjak, Dunja Mićunović |
| Serbian (Latin) | Konceptualizacija | Milica Pavlović, Milica Sevkušić, Anja Bošnjak, Dunja Mićunović |
| Slovak | Konceptualizácia | Nina Trubanová, Marek Vranka, Viktória Šinkorová, Zuzana Irsova |
| Spanish | Conceptualización | Riva Quiroga, Jordi Lacruz Casado, Marc Roger Bria Ramirez |
| Swahili | Udhanifu | Boniface Maenge Munyao, Harriet Melany Nyamvula, Jemimah Mutisya Kavinya |
| Swedish | Konceptualisering | Jonas Knape, Hedvig Nenzén |
| Turkish | Kavramsallaştırma | Murat Tahtali, Ugur Turhan |
| Ukrainian | Концептуалізація | Avhustyn Terpeliuk and his English teacher from Ukraine |
| Vietnamese | Hình thành ý tưởng | Manh-Toan Ho, Nguyễn Đức Hùng |
As of 9 September 2025, we have completed, and provided publicly via our website, translations for thirty-six different languages, including some language or script variants. Regarding variants, in the case of Chinese we have created separate translations for traditional and simplified Chinese, and for Serbian we have created both Latin-script and Cyrillic-script versions (each script is used by certain Serbian journals).

Figure 4. A portion of the human-readable webpage of our Dutch translation of CRediT. The page comprises the translation, the original English, the names of the translators (with hyperlinks to their ORCIDs, when available), a link to a machine-readable version of the translation encoded in JSON format, the usage license for the translation, and a description of how the translation was created.
Suggestions for improvements to the translations are welcome, and can be made by opening an issue on the GitHub page (https://github.com/contributorshipcollaboration/credit-translation/issues) or contacting one of the project leads. Our JSON file includes a version number, which complements the GitHub file history showing any changes (which so far have been minimal).
Machine-readable Translation for Publishing Services
The human-readable translations posted on our website are accompanied by machine-readable versions: structured JSON files. The JSON files reside in our GitHub translation repository (https://github.com/contributorshipcollaboration/credit-translation). The final design of our JSON schema reflects advice from the Open Journal Systems community (Smecher, 2024).
For the English version of CRediT, publishers can use the machine-readable list of CRediT roles (https://github.com/JATS4R/jats-schematrons/blob/master/schematrons/1.0/credit-roles.xml) created by the NISO working group JATS 4 Reuse (JATS4R; https://jats4r.niso.org/). A limitation, however, is that JATS4R’s service (its Schematron) provides the titles of the CRediT roles but not their descriptions. Our schema does include the text descriptions, which publishers can use in their software, in English as well as in other languages. See the repository for the English ‘translation’ (https://github.com/contributorshipcollaboration/credit-translation/blob/main/translations/en.json).
As an example of how our translations are used, consider Open Journal Systems. As mentioned above, OJS serves many language communities, and OJS developers have created a plug-in to implement CRediT, which utilizes our translations (https://github.com/pkp/credit). In a future version, OJS plans to integrate CRediT into its core functionality, using our translations.
In addition to the CRediT content, our JSON schema also includes fields for the names of the translators, their ORCID identifiers (if they have one), and a free-text description of the translation process. As with listing authors on scholarly papers, including the names of the creators serves multiple purposes. One purpose is to give appropriate credit for the translation. Another is to comply with copyright law – with the original CRediT text being licensed CC-BY, the attribution for the translation (“adapted material” in Creative Commons terminology) should include the original license holders as well as the translators. A third reason for including the creators’ names is accountability, which may support the credibility and the quality of the translation.
Promotion of CRediT, and of Contributorship More Generally
In the context of our Contributorship Collaboration, creation of the translations is just one element of a broader effort to facilitate improvements in crediting researchers for their specific contributions.

Figure 5. Completed and in-progress CRediT translations into languages other than English by the Contributorship Collaboration and the ways we promote them. The bricks at the bottom represent key groups of players in multilingual publishing. The numbers in grey circles provide an indication of how many entities we have contacted as of September 2025.
Once a translation is completed, we attempt to contact publishers and research organizations associated with that language community (see Figure 5). For several languages, we have contacted associated academies of science, journals, and publishers to make them aware of the translation into their language. In this outreach, we also try to advocate for the contributorship model in general, as we realize that the recipients may not be aware of, or motivated to consider, this addition to the authorship model.
We think our outreach is most likely to be effective when journal editors, scientific societies, and publishers already have some awareness of contributorship, even if only a dim one. Our outreach typically consists of “cold” (no pre-existing connection) emails, to which the response rate is low. We do receive some responses, and in many cases it seems clear that the contact was not previously aware of the option and importance of reporting contributorship information. Fortunately, some of these responses have been positive. We have replied to them and are optimistic that some will result in changes to policies, possibly enhancing the crediting of researchers who publish in certain venues.
Sociotechnical Aspects of Adoption
Historically, the largest increase in adoption of CRediT seems to have been driven by large publishers’ decisions to integrate CRediT into their systems. Integration involves both producing JATS-XML metadata with CRediT information and changing manuscript submission systems, adding tick boxes or similar for authors to indicate which of the 14 categories each co-author contributed to.
The Public Library of Science (PLOS) appears to have been the first major publisher to implement CRediT, asking authors at its journals from 2016 to indicate which CRediT categories each of them contributed to, while simultaneously rolling out CRediT metadata publication (Atkins, 2016). In a similar fashion, in 2023 Elsevier implemented CRediT for hundreds of journals (Genova, 2023). The groundwork for such adoption was laid by the involvement of researchers in creating and advocating for CRediT (Brand et al., 2015), but large-scale usage is greatly facilitated by publishers or publishing service providers (such as journal management systems) changing their systems.
With Open Journal Systems likely being the most popular journal publishing system for non-English scholarly communication, the biggest increase in CRediT adoption among non-English communities may ultimately depend on the pace of integration of CRediT with OJS. However, it is also possible to adopt CRediT, albeit in a limited fashion, without any technical changes to publishing services. As mentioned in the “History and Implementation of CRediT in English” section, authors can simply write about each author’s contributions in the Author Note or Acknowledgments sections of an article using the CRediT taxonomy.
Facilitating the use of CRediT at journals with digital systems that have not yet implemented CRediT is one purpose of the tenzing web app available at https://tenzing.club. Research teams using tenzing fill out a spreadsheet indicating which of the 14 categories each contributor participated in. Tenzing can then provide output in the form of text that manuscript authors can include in their manuscripts (Holcombe et al., 2020; Kovacs et al., 2021). Tenzing also provides the JATS-XML metadata, although we do not know of any journal whose submission system allows authors to provide the metadata themselves. However, Aries Systems Corporation, the provider of the Editorial Manager journal management system, sells an additional software product called Ingest to which one can upload the relevant JATS-XML metadata, which is then put into the Editorial Manager site for a journal. Example users of Ingest are described as language editing services, collaborative authoring tools, and publisher-branded portals (Aries Systems, 2019).
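The kind of transformation tenzing performs can be sketched as follows: from a per-contributor mapping of CRediT roles to a publishable statement. Tenzing itself reads a spreadsheet and offers more output options; the table and function here are illustrative, not tenzing’s actual code.

```python
# Illustrative input: which CRediT roles each contributor took on.
# (Tenzing collects this via a spreadsheet rather than a dict.)
contributors = {
    "A.B.": ["Conceptualization", "Software"],
    "C.D.": ["Conceptualization", "Writing - original draft"],
}

def credit_statement(contributors):
    """Invert the person->roles mapping and format a CRediT statement."""
    by_role = {}
    for person, roles in contributors.items():
        for role in roles:
            by_role.setdefault(role, []).append(person)
    return " ".join(f"{role}: {', '.join(people)}."
                    for role, people in by_role.items())

statement = credit_statement(contributors)
```

Text produced this way can be pasted into an Author Note or Acknowledgments section even when the journal’s submission system has no CRediT support.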
Discussion and Future Directions
To translate the CRediT standard, several dozen of us worked together across disparate language communities. As a result, many of us are now better informed about CRediT and contributorship, as are many more multilingual contacts who we have been in touch with. This multilingual community may in future be consulted about possible revisions to the CRediT standard or related initiatives that could further improve global participation in science.
While CRediT is currently only used for the co-authors of an article, its creators did not intend for its use to be restricted to authors. Expanding its use could garner more recognition for people who traditionally are only mentioned in the Acknowledgments section. Mentions in an Acknowledgment section do not include machine-readable metadata and typically do not include a unique identifier for tallying contributions across papers.
Depending on the subfield, people mentioned only in the Acknowledgments may include specialists such as technicians, library or information professionals, and project managers. Additional persons may include undergraduate student contributors and research assistants. Currently, journals follow a somewhat arbitrary practice of restricting the contribution statement to the authors of the associated journal article. Unfortunately, the criteria for authorship used by many journals, such as the hundreds of journals that use International Committee of Medical Journal Editors (ICMJE) guidelines, are restrictive, leaving many significant research contributors to be mentioned only in the Acknowledgments or not at all (Holcombe, 2019).
As mentioned in the Introduction, two limitations of CRediT are that the roles it provides are too limited for some purposes, and too general for others. The Contributor Roles Ontology (CRO) is an extension of CRediT with over 50 terms that provide more specific contribution types within the 14 CRediT contributions (Vasilevsky et al., 2020; White et al., 2019). CRO has not yet been adopted by many journals or other research venues, but we suggest that more should consider it, especially because it is so compatible with the CRediT standard that is now so widely used.
Many research projects include resources that go beyond the facets that CRediT, or even CRO, provide roles for. For software packages, for example, R uses syntax and roles developed by MARC (MAchine-Readable Cataloging) to allow specification of roles including that of the compiler, the creator, the thesis advisor, and the translator (Coyle, 2011; Hornik et al., 2012). Other important entities associated with research projects include study preregistrations, protocols, and datasets. For datasets, the DataCite Metadata Schema is used for millions of datasets (Brase, 2009; Ninkov et al., 2021). DataCite records who contributed to the creation of a dataset, along with a range of other information. DataCite does not appear to have been translated yet, and translating it may be worthwhile.
More generally, we hope that efforts like ours, if applied to other protocols for providing information about scientific projects, will further facilitate the information available about, and global integration of, the work of non-English scholarly communities.
Acknowledgments
We thank Alec Smecher (ORCID 0000-0002-4905-4662) for suggestions and code relevant to our translation JSON schema (CRediT category: Software).
CRediT Contributorship Statement
Contributing to a translation was considered sufficient to receive the Investigation role from the CRediT taxonomy. Initials of relevant co-authors are used in the following text, except for those with duplicate initials. “Data curation” refers to involvement in creating the machine-readable JSON files.
Conceptualization: A.O.H., M.L., T.L., and M. Kovacs. Data curation: A.O.H., M.L., P.P., D. Kochetkov, R.P., W.K.Y., D.C.V., M.A.V., M. Kowal, and M. Kovacs. Investigation: A.O.H., M.L., B.S., E.S.J.T., P.P., D. Kochetkov, R.P., D.M., B.M., S.S.S., U.H.F., W.K.Y., D.C.V., E.M.S., N.T., M.A.V., A.S., A.Č., C.O.A., R.C., A.M., S.L., Y.Y., M.-T.H., E.T., O.-L.M., I. Kačergytė, T.L., D.E.I., R.O., O.M., A.B., M.A.F., M.D.P., M. Kowal, M.Š., E.P., V.Š., M.V., H.H., A.T., Z.I., B.M.M., I. Kochetkova, M.A., H.M.N., J.M.K., H.K.N., J. Knape, G.D., M.T., A.D., H.A.M.A., J.L., H.L., R.Q., D. Kim, J. Krasucki, A.G.-K., B.D.-V., M.R.B.R., J.L.C., J.-W.L., H.-K.M., M.S., G.C., S.N.J., S.O., S.I., L.R., V.P., U.T., M. Kovacs, and N.P. Project administration: A.O.H., M.L., and M. Kovacs. Software: M. Kovacs. Visualization: A.O.H., M.L., and P.P. Writing – original draft: A.O.H., M.L., E.S.J.T., D. Kochetkov, and H.A.M.A. Writing – review & editing: A.O.H., M.L., B.S., E.S.J.T., P.P., D. Kochetkov, R.P., D.M., B.M., S.S.S., W.K.Y., D.C.V., E.M.S., N.T., A.S., A.Č., C.O.A., R.C., A.M., Y.Y., M.-T.H., E.T., O.-L.M., I. Kačergytė, A.B., M.A.F., M.D.P., M. Kowal, V.Š., M.V., H.H., A.T., Z.I., B.M.M., I. Kochetkova, M.A., H.M.N., J.M.K., A.D., H.A.M.A., D. Kim, A.G.-K., M.R.B.R., J.L.C., J.-W.L., H.-K.M., L.R., V.P., U.T., and M. Kovacs
Disclosure Statement
The authors report there are no competing interests to declare.
Funding Details
Antica Čulina was supported by the Croatian Science Foundation, project number HRZZ-IP-2022-10-2872; Erika M. Santana was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP, grant number 2024/12435-6) as part of the São Paulo Excellence Chair Project (SPEC) entitled ‘Innovations in Human and Non-Human Communities’ (FAPESP grant number 2018/18900-1); Swastika Issar was supported by the Tata Institute of Fundamental Research.
References
Allen, L., O’Connell, A., & Kiermer, V. (2019). How can we ensure visibility and diversity in research contributions? How the Contributor Role Taxonomy (CRediT) is helping the shift from authorship to contributorship. Learned Publishing, 32(1), 71–74. https://doi.org/10.1002/leap.1210
Amano, T., Berdejo-Espinola, V., Christie, A. P., Willott, K., Akasaka, M., Báldi, A., Berthinussen, A., Bertolino, S., Bladon, A. J., Chen, M., Choi, C.-Y., Kharrat, M. B. D., Oliveira, L. G. de, Farhat, P., Golivets, M., Aranzamendi, N. H., Jantke, K., Kajzer-Bonk, J., Aytekin, M. Ç. K., … Sutherland, W. J. (2021). Tapping into non-English-language science for the conservation of global biodiversity. PLOS Biology, 19(10), e3001296. https://doi.org/10.1371/journal.pbio.3001296
Aries Systems. (2019). More submission options for authors with Ingest Service. https://www.ariessys.com/blog/submission-options-authors-ingest-service/
Atkins, H. (2016, July 8). Author Credit: PLOS and CRediT Update. The Official PLOS Blog. https://theplosblog.plos.org/2016/07/author-credit-plos-and-credit-update/
Beck, J. (2011). NISO Z39.96 The Journal Article Tag Suite (JATS): What Happened to the NLM DTDs? The Journal of Electronic Publishing, 14(1). https://doi.org/10.3998/3336451.0014.106
Brand, A., Allen, L., Altman, M., Hlava, M., & Scott, J. (2015). Beyond authorship: Attribution, contribution, collaboration, and credit. Learned Publishing, 28(2), 151–155. https://doi.org/10.1087/20150211
Brase, J. (2009). DataCite-A global registration agency for research data. 2009 Fourth International Conference on Cooperation and Promotion of Information Resources in Science and Technology, 257–261. https://ieeexplore.ieee.org/abstract/document/5361881/
Brunskill, A., & Hanneke, R. (2022). The case of the disappearing librarians: Analyzing documentation of librarians’ contributions to systematic reviews. Journal of the Medical Library Association, 110(4), Article 4. https://doi.org/10.5195/jmla.2022.1505
Chowdhury, S., Gonzalez, K., Aytekin, M. Ç. K., Baek, S., Bełcik, M., Bertolino, S., Duijns, S., Han, Y., Jantke, K., Katayose, R., Lin, M., Nourani, E., Ramos, D. L., Rouyer, M., Sidemo‐Holm, W., Vozykova, S., Zamora‐Gutierrez, V., & Amano, T. (2022). Growth of non‐English‐language literature on biodiversity conservation. Conservation Biology, 36(4), e13883. https://doi.org/10.1111/cobi.13883
Coyle, K. (2011). MARC21 as Data: A Start. The Code4Lib Journal, 14. https://journal.code4lib.org/articles/5468
Genova, Z. (2023, May 16). CRediT where credit’s due. Elsevier Connect. https://www.elsevier.com/en-gb/connect/credit-where-credits-due
Holcombe, A. O. (2019). Contributorship, Not Authorship: Use CRediT to Indicate Who Did What. Publications, 7(3), Article 3. https://doi.org/10.3390/publications7030048
Holcombe, A. O., Kovacs, M., Aust, F., & Aczel, B. (2020). Documenting contributions to scholarly articles using CRediT and tenzing. PLOS ONE, 15(12), e0244611. https://doi.org/10.1371/journal.pone.0244611
Holcombe, A. O., Lüke, T., & Kovacs, M. (2023). Translating CRediT for increased accessibility and adoption. Big Team Science.
Hornik, K., Murdoch, D., & Zeileis, A. (2012). Who Did What? The Roles of R Package Authors and How to Refer to Them. The R Journal, 4(1), 64–69. https://rjournal.github.io/
International Committee of Medical Journal Editors. (2013). The New ICMJE Recommendations (August 2013). https://www.icmje.org/news-and-editorials/new_rec_aug2013.html
Khanna, S., Ball, J., Alperin, J. P., & Willinsky, J. (2022). Recalibrating the scope of scholarly publishing: A modest step in a vast decolonization process. Quantitative Science Studies, 3(4), 912–930. https://doi.org/10.1162/qss_a_00228
Kovacs, M., Holcombe, A., Aust, F., & Aczel, B. (2021). Tenzing and the importance of tool development for research efficiency. Information Services & Use, 41(1–2), 123–130. https://doi.org/10.3233/ISU-210109
Larivière, V., Pontille, D., & Sugimoto, C. R. (2021). Investigating the division of scientific labor using the Contributor Roles Taxonomy (CRediT). Quantitative Science Studies, 2(1), 111–128. https://doi.org/10.1162/qss_a_00097
Massicotte, P., & South, A. (2023). rnaturalearth: World map data from Natural Earth [Manual]. https://doi.org/10.32614/CRAN.package.rnaturalearth
McGonagle-O’Connell, A. (2022, June 21). American Chemical Society adopts CRediT. CRediT. https://credit.niso.org/publisher-adopters/adoption-milestone-acs-adds-credit/
Meta-psychology editors. (n.d.). Submissions. Meta-Psychology Journal. Retrieved June 5, 2025, from https://open.lnu.se/index.php/metapsychology/about/submissions
Ninkov, A., Gregory, K., Peters, I., & Haustein, S. (2021). Datasets on DataCite—An initial bibliometric investigation. Proceeding of the 18th International Conference on Scientometrics & Informetrics (ISSI 2021), 1–6. https://pdfs.semanticscholar.org/0c75/250dc0230cfc699f9732e7a94e4da8e9ff94.pdf
NISO CRediT Working Group. (2022). ANSI/NISO Z39.104-2022, CRediT, Contributor Roles Taxonomy. NISO. https://doi.org/10.3789/ansi.niso.z39.104-2022
Pebesma, E., & Bivand, R. (2023). Spatial Data Science: With Applications in R (1st ed.). Chapman and Hall/CRC. https://doi.org/10.1201/9780429459016
Rennie, D. (1997). When Authorship Fails: A Proposal to Make Contributors Accountable. JAMA, 278(7), 579. https://doi.org/10.1001/jama.1997.03550070071041
Schönbrodt, F., Gärtner, A., Frank, M., Gollwitzer, M., Ihle, M., Mischkowski, D., Phan, L. V., Schmitt, M., Scheel, A., Schubert, A.-L., Steinberg, U., & Leising, D. (2025). Responsible Research Assessment I: Implementing DORA and CoARA for hiring and promotion in psychology. OSF. https://doi.org/10.31234/osf.io/rgh5b_v2
Smecher, A. (2024, June 12). Convert translations · Pull Request #22 · contributorshipcollaboration/credit-translation. GitHub. https://github.com/contributorshipcollaboration/credit-translation/pull/22
Turba, R., Thoré, E. S. J., Bertram, M. G., Bridg, H., Sabet, S. S., Gamboa, M., Ríos-Orjuela, J. C., Takola, E., Capa Salinas, J., Sampaio Franco, A. C., & Marín, C. (2025). Global North-South science inequalities due to language and funding barriers. Zenodo. https://doi.org/10.5281/zenodo.14902147
Valenzuela-Toro, A. M., & Viglino, M. (2021). How Latin American researchers suffer in science. Nature, 598(7880), 374–375. https://doi.org/10.1038/d41586-021-02601-8
Vasilevsky, N. A., Hosseini, M., Teplitzky, S., Ilik, V., Mohammadi, E., Schneider, J., Kern, B., Colomb, J., Edmunds, S. C., Gutzman, K., Himmelstein, D. S., White, M., Smith, B., O’Keefe, L., Haendel, M., & Holmes, K. L. (2020). Is authorship sufficient for today’s collaborative research? A call for contributor roles. Accountability in Research, 1–21. https://doi.org/10.1080/08989621.2020.1779591
White, M., Vasilevsky, N., & Haendel, M. (2019). data2health/contributor-role-ontology: 2019-02-22 Initial release [Computer software]. Zenodo. https://doi.org/10.5281/zenodo.2593739