
Assessment of Adherence to Reporting Guidelines by Commonly Used Clinical Prediction Models From a Single Vendor: A Systematic Review



Bibliographic Details
Main authors: Lu, Jonathan H., Callahan, Alison, Patel, Birju S., Morse, Keith E., Dash, Dev, Pfeffer, Michael A., Shah, Nigam H.
Format: Online Article Text
Language: English
Published: American Medical Association, 2022
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9391954/
https://www.ncbi.nlm.nih.gov/pubmed/35984654
http://dx.doi.org/10.1001/jamanetworkopen.2022.27779
Collection: PubMed
Description:

IMPORTANCE: Various model reporting guidelines have been proposed to ensure that clinical prediction models are reliable and fair. However, no consensus exists about which model details are essential to report, and the commonalities and differences among reporting guidelines have not been characterized. Furthermore, how well the documentation of deployed models adheres to these guidelines has not been studied.

OBJECTIVES: To assess the information requested by model reporting guidelines and whether the documentation for commonly used machine learning models developed by a single vendor provides that information.

EVIDENCE REVIEW: MEDLINE was queried using the terms "machine learning model card" and "reporting machine learning" from November 4 to December 6, 2020. References were reviewed to find additional publications, and publications without specific reporting recommendations were excluded. Similar elements requested for reporting were merged into representative items. Four independent reviewers and 1 adjudicator assessed how often documentation for the most commonly used models developed by a single vendor reported the items.

FINDINGS: From 15 model reporting guidelines, 220 unique items were identified that represented the collective reporting requirements. Although 12 items were commonly requested (by 10 or more guidelines), 77 items were requested by just 1 guideline. Documentation for 12 commonly used models from a single vendor reported a median of 39% (IQR, 37%-43%; range, 31%-47%) of items from the collective reporting requirements. Many of the commonly requested items had 100% reporting rates, including items concerning outcome definition, area under the receiver operating characteristic curve, internal validation, and intended clinical use. Several items related to reliability, such as external validation, uncertainty measures, and the strategy for handling missing data, were reported half the time or less. Other frequently unreported items related to fairness (summary statistics and subgroup analyses, including for race and ethnicity or sex).

CONCLUSIONS AND RELEVANCE: These findings suggest that consistent reporting recommendations for clinical prediction models are needed so that model developers can share the information necessary for model deployment. The many published guidelines would, collectively, require reporting more than 200 items. Model documentation from 1 vendor reported the most commonly requested items from model reporting guidelines; however, areas for improvement were identified in reporting items related to model reliability and fairness. This analysis led to feedback to the vendor, which motivated updates to the documentation for future users.
ID: pubmed-9391954
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
Published in: JAMA Netw Open, Original Investigation, American Medical Association, 2022-08-19
Copyright: 2022 Lu JH et al. JAMA Network Open. Open access article distributed under the terms of the CC-BY License (https://creativecommons.org/licenses/by/4.0/).
Topic: Original Investigation