
Piloting a Survey-Based Assessment of Transparency and Trustworthiness with Three Medical AI Tools

Artificial intelligence (AI) offers the potential to support healthcare delivery, but poorly trained or validated algorithms pose risks of harm. Ethical guidelines name transparency about model development and validation as a requirement for trustworthy AI. Abundant guidance exists on providing transparency through reporting, yet poorly reported medical AI tools remain common. To close this transparency gap, we developed and piloted a framework to quantify the transparency of medical AI tools with three use cases. Our framework comprises a survey to report on the intended use, training and validation data and processes, ethical considerations, and deployment recommendations. The transparency of each response was scored as 0, 0.5, or 1 to reflect whether the requested information was not, partially, or fully provided. Additionally, we assessed on an analogous three-point scale whether the provided responses fulfilled the transparency requirement for a set of trustworthiness criteria from ethical guidelines. The degree of transparency and trustworthiness was calculated on a scale from 0% to 100%. Our assessment of three medical AI use cases pinpointed reporting gaps and resulted in transparency scores of 67% for two use cases and 59% for the third. We report anecdotal evidence that business constraints and limited information from external datasets were major obstacles to providing transparency for the three use cases. The observed transparency gaps also lowered the degree of trustworthiness, indicating compliance gaps with ethical guidelines. All three pilot use cases faced challenges in providing transparency about medical AI tools, and further studies are needed to investigate such obstacles across the wider medical AI sector. Applying this framework for an external assessment of transparency may be infeasible if business constraints prevent the disclosure of information. New strategies may be necessary to enable audits of medical AI tools while preserving business secrets.
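The scoring described above reduces to simple arithmetic. As an illustration only, and not the authors' published implementation, here is a minimal Python sketch that computes the degree of transparency from per-item scores of 0, 0.5, or 1, assuming the overall degree is the mean of the item scores expressed as a percentage (the paper's exact aggregation may differ); the names transparency_degree and VALID_SCORES are hypothetical.

# Minimal sketch of the three-point scoring described in the abstract.
# Assumption: the overall degree is the mean of per-item scores expressed
# as a percentage; the paper's exact aggregation may differ.
VALID_SCORES = {0.0, 0.5, 1.0}  # not / partially / fully provided

def transparency_degree(scores):
    """Return the degree of transparency (0-100) for a list of item scores."""
    if not scores:
        raise ValueError("at least one scored survey item is required")
    invalid = [s for s in scores if s not in VALID_SCORES]
    if invalid:
        raise ValueError(f"invalid scores {invalid}; expected 0, 0.5, or 1")
    return 100.0 * sum(scores) / len(scores)

# Example: ten survey items with mixed reporting completeness -> 70%.
print(f"{transparency_degree([1, 1, 0.5, 1, 0, 0.5, 1, 0.5, 1, 0.5]):.0f}%")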

Bibliographic Details
Main Authors: Fehr, Jana; Jaramillo-Gutierrez, Giovanna; Oala, Luis; Gröschel, Matthias I.; Bierwirth, Manuel; Balachandran, Pradeep; Werneck-Leite, Alixandro; Lippert, Christoph
Format: Online Article Text
Language: English
Published: MDPI, 2022-09-30, in Healthcare (Basel)
License: © 2022 by the authors; open access under the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9601535/
https://www.ncbi.nlm.nih.gov/pubmed/36292369
http://dx.doi.org/10.3390/healthcare10101923