
Validating the knowledge bank approach for personalized prediction of survival in acute myeloid leukemia: a reproducibility study


Bibliographic Details
Main Authors: Xu, Yujun, Mansmann, Ulrich
Format: Online Article Text
Language: English
Published: Springer Berlin Heidelberg 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9360099/
https://www.ncbi.nlm.nih.gov/pubmed/35429300
http://dx.doi.org/10.1007/s00439-022-02455-8
_version_ 1784764279176560640
author Xu, Yujun
Mansmann, Ulrich
author_facet Xu, Yujun
Mansmann, Ulrich
author_sort Xu, Yujun
collection PubMed
description Reproducibility is not only essential for the integrity of scientific research but is also a prerequisite for model validation and refinement for the future application of predictive algorithms. However, reproducible research is becoming increasingly challenging, particularly in high-dimensional genomic data analyses with complex statistical or algorithmic techniques. Given that there are no mandatory requirements in most biomedical and statistical journals to provide the original data, analytical source code, or other relevant materials for publication, accessibility to these supplements naturally suggests a greater credibility of the published work. In this study, we performed a reproducibility assessment of the notable paper by Gerstung et al. (Nat Genet 49:332–340, 2017) by rerunning the analysis using their original code and data, which are publicly accessible. Despite an open science setting, it was challenging to reproduce the entire research project; reasons included: incomplete data and documentation, suboptimal code readability, coding errors, limited portability of intensive computing performed on a specific platform, and an R computing environment that could no longer be re-established. We learn that the availability of code and data does not guarantee transparency and reproducibility of a study; paradoxically, the source code is still liable to error and obsolescence, essentially due to methodological and computational complexity, a lack of reproducibility checking at submission, and updates for software and operating environment. The complex code may also hide problematic methodological aspects of the proposed research. Building on the experience gained, we discuss the best programming and software engineering practices that could have been employed to improve reproducibility, and propose practical criteria for the conduct and reporting of reproducibility studies for future researchers. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00439-022-02455-8.
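Purely as an illustrative aside (not part of this record and not the authors' or Gerstung et al.'s code), one of the practices the abstract points to, making the R computing environment re-establishable, can be sketched roughly as follows; the choice of the renv package here is an assumption, since any lockfile-based tool would serve the same purpose:

    # Hypothetical sketch: capture the exact R and package versions so the
    # computing environment can be re-established by other researchers.
    writeLines(capture.output(sessionInfo()), "sessionInfo.txt")

    # With the 'renv' package (an assumed choice of tool), write a lockfile
    # recording exact package versions; renv::restore() rebuilds them later.
    if (requireNamespace("renv", quietly = TRUE)) {
      renv::snapshot(prompt = FALSE)
    }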
format Online
Article
Text
id pubmed-9360099
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Springer Berlin Heidelberg
record_format MEDLINE/PubMed
spelling pubmed-9360099 2022-08-10 Validating the knowledge bank approach for personalized prediction of survival in acute myeloid leukemia: a reproducibility study Xu, Yujun Mansmann, Ulrich Hum Genet Original Investigation Reproducibility is not only essential for the integrity of scientific research but is also a prerequisite for model validation and refinement for the future application of predictive algorithms. However, reproducible research is becoming increasingly challenging, particularly in high-dimensional genomic data analyses with complex statistical or algorithmic techniques. Given that there are no mandatory requirements in most biomedical and statistical journals to provide the original data, analytical source code, or other relevant materials for publication, accessibility to these supplements naturally suggests a greater credibility of the published work. In this study, we performed a reproducibility assessment of the notable paper by Gerstung et al. (Nat Genet 49:332–340, 2017) by rerunning the analysis using their original code and data, which are publicly accessible. Despite an open science setting, it was challenging to reproduce the entire research project; reasons included: incomplete data and documentation, suboptimal code readability, coding errors, limited portability of intensive computing performed on a specific platform, and an R computing environment that could no longer be re-established. We learn that the availability of code and data does not guarantee transparency and reproducibility of a study; paradoxically, the source code is still liable to error and obsolescence, essentially due to methodological and computational complexity, a lack of reproducibility checking at submission, and updates for software and operating environment. The complex code may also hide problematic methodological aspects of the proposed research. Building on the experience gained, we discuss the best programming and software engineering practices that could have been employed to improve reproducibility, and propose practical criteria for the conduct and reporting of reproducibility studies for future researchers. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00439-022-02455-8. Springer Berlin Heidelberg 2022-04-16 2022 /pmc/articles/PMC9360099/ /pubmed/35429300 http://dx.doi.org/10.1007/s00439-022-02455-8 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Original Investigation
Xu, Yujun
Mansmann, Ulrich
Validating the knowledge bank approach for personalized prediction of survival in acute myeloid leukemia: a reproducibility study
title Validating the knowledge bank approach for personalized prediction of survival in acute myeloid leukemia: a reproducibility study
title_full Validating the knowledge bank approach for personalized prediction of survival in acute myeloid leukemia: a reproducibility study
title_fullStr Validating the knowledge bank approach for personalized prediction of survival in acute myeloid leukemia: a reproducibility study
title_full_unstemmed Validating the knowledge bank approach for personalized prediction of survival in acute myeloid leukemia: a reproducibility study
title_short Validating the knowledge bank approach for personalized prediction of survival in acute myeloid leukemia: a reproducibility study
title_sort validating the knowledge bank approach for personalized prediction of survival in acute myeloid leukemia: a reproducibility study
topic Original Investigation
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9360099/
https://www.ncbi.nlm.nih.gov/pubmed/35429300
http://dx.doi.org/10.1007/s00439-022-02455-8
work_keys_str_mv AT xuyujun validatingtheknowledgebankapproachforpersonalizedpredictionofsurvivalinacutemyeloidleukemiaareproducibilitystudy
AT mansmannulrich validatingtheknowledgebankapproachforpersonalizedpredictionofsurvivalinacutemyeloidleukemiaareproducibilitystudy