Enhancing trust in AI through industry self-governance

Artificial intelligence (AI) is critical to harnessing value from exponentially growing health and healthcare data. Expectations are high for AI solutions to effectively address current health challenges. However, there have been prior periods of enthusiasm for AI followed by periods of disillusionment, reduced investments, and progress, known as “AI Winters.” We are now at risk of another AI Winter in health/healthcare due to increasing publicity of AI solutions that are not representing touted breakthroughs, and thereby decreasing trust of users in AI. In this article, we first highlight recently published literature on AI risks and mitigation strategies that would be relevant for groups considering designing, implementing, and promoting self-governance. We then describe a process for how a diverse group of stakeholders could develop and define standards for promoting trust, as well as AI risk-mitigating practices through greater industry self-governance. We also describe how adherence to such standards could be verified, specifically through certification/accreditation. Self-governance could be encouraged by governments to complement existing regulatory schema or legislative efforts to mitigate AI risks. Greater adoption of industry self-governance could fill a critical gap to construct a more comprehensive approach to the governance of AI solutions than US legislation/regulations currently encompass. In this more comprehensive approach, AI developers, AI users, and government/legislators all have critical roles to play to advance practices that maintain trust in AI and prevent another AI Winter.

Bibliographic Details

Main Authors: Roski, Joachim, Maier, Ezekiel J, Vigilante, Kevin, Kane, Elizabeth A, Matheny, Michael E
Format: Online Article Text
Language: English
Published: Oxford University Press 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8661431/
https://www.ncbi.nlm.nih.gov/pubmed/33895824
http://dx.doi.org/10.1093/jamia/ocab065
_version_ 1784613365010661376
author Roski, Joachim
Maier, Ezekiel J
Vigilante, Kevin
Kane, Elizabeth A
Matheny, Michael E
author_facet Roski, Joachim
Maier, Ezekiel J
Vigilante, Kevin
Kane, Elizabeth A
Matheny, Michael E
author_sort Roski, Joachim
collection PubMed
description Artificial intelligence (AI) is critical to harnessing value from exponentially growing health and healthcare data. Expectations are high for AI solutions to effectively address current health challenges. However, there have been prior periods of enthusiasm for AI followed by periods of disillusionment, reduced investments, and progress, known as “AI Winters.” We are now at risk of another AI Winter in health/healthcare due to increasing publicity of AI solutions that are not representing touted breakthroughs, and thereby decreasing trust of users in AI. In this article, we first highlight recently published literature on AI risks and mitigation strategies that would be relevant for groups considering designing, implementing, and promoting self-governance. We then describe a process for how a diverse group of stakeholders could develop and define standards for promoting trust, as well as AI risk-mitigating practices through greater industry self-governance. We also describe how adherence to such standards could be verified, specifically through certification/accreditation. Self-governance could be encouraged by governments to complement existing regulatory schema or legislative efforts to mitigate AI risks. Greater adoption of industry self-governance could fill a critical gap to construct a more comprehensive approach to the governance of AI solutions than US legislation/regulations currently encompass. In this more comprehensive approach, AI developers, AI users, and government/legislators all have critical roles to play to advance practices that maintain trust in AI and prevent another AI Winter.
format Online
Article
Text
id pubmed-8661431
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Oxford University Press
record_format MEDLINE/PubMed
spelling pubmed-86614312021-12-10 Enhancing trust in AI through industry self-governance Roski, Joachim Maier, Ezekiel J Vigilante, Kevin Kane, Elizabeth A Matheny, Michael E J Am Med Inform Assoc Perspective Artificial intelligence (AI) is critical to harnessing value from exponentially growing health and healthcare data. Expectations are high for AI solutions to effectively address current health challenges. However, there have been prior periods of enthusiasm for AI followed by periods of disillusionment, reduced investments, and progress, known as “AI Winters.” We are now at risk of another AI Winter in health/healthcare due to increasing publicity of AI solutions that are not representing touted breakthroughs, and thereby decreasing trust of users in AI. In this article, we first highlight recently published literature on AI risks and mitigation strategies that would be relevant for groups considering designing, implementing, and promoting self-governance. We then describe a process for how a diverse group of stakeholders could develop and define standards for promoting trust, as well as AI risk-mitigating practices through greater industry self-governance. We also describe how adherence to such standards could be verified, specifically through certification/accreditation. Self-governance could be encouraged by governments to complement existing regulatory schema or legislative efforts to mitigate AI risks. Greater adoption of industry self-governance could fill a critical gap to construct a more comprehensive approach to the governance of AI solutions than US legislation/regulations currently encompass. In this more comprehensive approach, AI developers, AI users, and government/legislators all have critical roles to play to advance practices that maintain trust in AI and prevent another AI Winter. Oxford University Press 2021-04-25 /pmc/articles/PMC8661431/ /pubmed/33895824 http://dx.doi.org/10.1093/jamia/ocab065 Text en © The Author(s) 2021. 
Published by Oxford University Press on behalf of the American Medical Informatics Association. https://creativecommons.org/licenses/by-nc-nd/4.0/This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs licence (http://creativecommons.org/licenses/by-nc-nd/4.0/ (https://creativecommons.org/licenses/by-nc-nd/4.0/) ), which permits non-commercial reproduction and distribution of the work, in any medium, provided the original work is not altered or transformed in any way, and that the work is properly cited. For commercial re-use, please contact journals.permissions@oup.com
spellingShingle Perspective
Roski, Joachim
Maier, Ezekiel J
Vigilante, Kevin
Kane, Elizabeth A
Matheny, Michael E
Enhancing trust in AI through industry self-governance
title Enhancing trust in AI through industry self-governance
title_full Enhancing trust in AI through industry self-governance
title_fullStr Enhancing trust in AI through industry self-governance
title_full_unstemmed Enhancing trust in AI through industry self-governance
title_short Enhancing trust in AI through industry self-governance
title_sort enhancing trust in ai through industry self-governance
topic Perspective
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8661431/
https://www.ncbi.nlm.nih.gov/pubmed/33895824
http://dx.doi.org/10.1093/jamia/ocab065
work_keys_str_mv AT roskijoachim enhancingtrustinaithroughindustryselfgovernance
AT maierezekielj enhancingtrustinaithroughindustryselfgovernance
AT vigilantekevin enhancingtrustinaithroughindustryselfgovernance
AT kaneelizabetha enhancingtrustinaithroughindustryselfgovernance
AT mathenymichaele enhancingtrustinaithroughindustryselfgovernance