
A large-scale study on research code quality and execution

This article presents a study on the quality and execution of research code from publicly-available replication datasets at the Harvard Dataverse repository. Research code is typically created by a group of scientists and published together with academic papers to facilitate research transparency and reproducibility. For this study, we define ten questions to address aspects impacting research reproducibility and reuse. First, we retrieve and analyze more than 2000 replication datasets with over 9000 unique R files published from 2010 to 2020. Second, we execute the code in a clean runtime environment to assess its ease of reuse. Common coding errors were identified, and some of them were solved with automatic code cleaning to aid code execution. We find that 74% of R files failed to complete without error in the initial execution, while 56% failed when code cleaning was applied, showing that many errors can be prevented with good coding practices. We also analyze the replication datasets from journals’ collections and discuss the impact of the journal policy strictness on the code re-execution rate. Finally, based on our results, we propose a set of recommendations for code dissemination aimed at researchers, journals, and repositories.
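The abstract describes re-executing archived R files in a clean runtime environment, with an automatic code-cleaning step that removes some common errors before retrying (74% of files failed initially versus 56% after cleaning). The sketch below is a minimal illustration of that general idea, not the study's actual pipeline: the helper names (clean_r_code, reexecute), the single cleaning rule of commenting out setwd() calls with absolute paths, and the one-hour timeout are assumptions for illustration, and the script assumes Rscript is installed and on PATH.

import re
import subprocess
import tempfile
from pathlib import Path

# Hypothetical, simplified "code cleaning": comment out setwd() calls that
# use absolute paths, since such paths cannot resolve in a clean environment
# on another machine. The study's actual cleaning covered more cases.
ABSOLUTE_SETWD = re.compile(r'^\s*setwd\s*\(\s*["\'](/|[A-Za-z]:)')

def clean_r_code(source: str) -> str:
    """Return a copy of an R script with absolute setwd() calls disabled."""
    cleaned = []
    for line in source.splitlines():
        if ABSOLUTE_SETWD.match(line):
            cleaned.append("# removed by cleaning: " + line)
        else:
            cleaned.append(line)
    return "\n".join(cleaned) + "\n"

def reexecute(r_file: Path, timeout_s: int = 3600) -> bool:
    """Run one R file with Rscript in a scratch directory and report success.

    A False result corresponds to the study's "failed to complete without
    error" outcome; assumes Rscript is available on PATH.
    """
    with tempfile.TemporaryDirectory() as workdir:
        target = Path(workdir) / r_file.name
        target.write_text(clean_r_code(r_file.read_text(errors="replace")))
        try:
            result = subprocess.run(
                ["Rscript", target.name],
                cwd=workdir,
                capture_output=True,
                timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return False
        return result.returncode == 0

if __name__ == "__main__":
    import sys
    print("success" if reexecute(Path(sys.argv[1])) else "error")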


Bibliographic Details
Main Authors: Trisovic, Ana, Lau, Matthew K., Pasquier, Thomas, Crosas, Mercè
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8861064/
https://www.ncbi.nlm.nih.gov/pubmed/35190569
http://dx.doi.org/10.1038/s41597-022-01143-6
_version_ 1784654805127397376
author Trisovic, Ana
Lau, Matthew K.
Pasquier, Thomas
Crosas, Mercè
author_facet Trisovic, Ana
Lau, Matthew K.
Pasquier, Thomas
Crosas, Mercè
author_sort Trisovic, Ana
collection PubMed
description This article presents a study on the quality and execution of research code from publicly-available replication datasets at the Harvard Dataverse repository. Research code is typically created by a group of scientists and published together with academic papers to facilitate research transparency and reproducibility. For this study, we define ten questions to address aspects impacting research reproducibility and reuse. First, we retrieve and analyze more than 2000 replication datasets with over 9000 unique R files published from 2010 to 2020. Second, we execute the code in a clean runtime environment to assess its ease of reuse. Common coding errors were identified, and some of them were solved with automatic code cleaning to aid code execution. We find that 74% of R files failed to complete without error in the initial execution, while 56% failed when code cleaning was applied, showing that many errors can be prevented with good coding practices. We also analyze the replication datasets from journals’ collections and discuss the impact of the journal policy strictness on the code re-execution rate. Finally, based on our results, we propose a set of recommendations for code dissemination aimed at researchers, journals, and repositories.
format Online
Article
Text
id pubmed-8861064
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-88610642022-03-15 A large-scale study on research code quality and execution Trisovic, Ana Lau, Matthew K. Pasquier, Thomas Crosas, Mercè Sci Data Analysis This article presents a study on the quality and execution of research code from publicly-available replication datasets at the Harvard Dataverse repository. Research code is typically created by a group of scientists and published together with academic papers to facilitate research transparency and reproducibility. For this study, we define ten questions to address aspects impacting research reproducibility and reuse. First, we retrieve and analyze more than 2000 replication datasets with over 9000 unique R files published from 2010 to 2020. Second, we execute the code in a clean runtime environment to assess its ease of reuse. Common coding errors were identified, and some of them were solved with automatic code cleaning to aid code execution. We find that 74% of R files failed to complete without error in the initial execution, while 56% failed when code cleaning was applied, showing that many errors can be prevented with good coding practices. We also analyze the replication datasets from journals’ collections and discuss the impact of the journal policy strictness on the code re-execution rate. Finally, based on our results, we propose a set of recommendations for code dissemination aimed at researchers, journals, and repositories. Nature Publishing Group UK 2022-02-21 /pmc/articles/PMC8861064/ /pubmed/35190569 http://dx.doi.org/10.1038/s41597-022-01143-6 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) .
spellingShingle Analysis
Trisovic, Ana
Lau, Matthew K.
Pasquier, Thomas
Crosas, Mercè
A large-scale study on research code quality and execution
title A large-scale study on research code quality and execution
title_full A large-scale study on research code quality and execution
title_fullStr A large-scale study on research code quality and execution
title_full_unstemmed A large-scale study on research code quality and execution
title_short A large-scale study on research code quality and execution
title_sort large-scale study on research code quality and execution
topic Analysis
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8861064/
https://www.ncbi.nlm.nih.gov/pubmed/35190569
http://dx.doi.org/10.1038/s41597-022-01143-6
work_keys_str_mv AT trisovicana alargescalestudyonresearchcodequalityandexecution
AT laumatthewk alargescalestudyonresearchcodequalityandexecution
AT pasquierthomas alargescalestudyonresearchcodequalityandexecution
AT crosasmerce alargescalestudyonresearchcodequalityandexecution
AT trisovicana largescalestudyonresearchcodequalityandexecution
AT laumatthewk largescalestudyonresearchcodequalityandexecution
AT pasquierthomas largescalestudyonresearchcodequalityandexecution
AT crosasmerce largescalestudyonresearchcodequalityandexecution