
(De)troubling transparency: artificial intelligence (AI) for clinical applications

Artificial intelligence (AI) and machine learning (ML) techniques occupy a prominent role in medical research in terms of the innovation and development of new technologies. However, while many perceive AI as a technology of promise and hope—one that is allowing for more early and accurate diagnosis...

Full description

Bibliographic Details
Main Authors: Winter, Peter David, Carusi, Annamaria
Format: Online Article Text
Language: English
Published: BMJ Publishing Group 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9985768/
https://www.ncbi.nlm.nih.gov/pubmed/35545432
http://dx.doi.org/10.1136/medhum-2021-012318
author Winter, Peter David
Carusi, Annamaria
collection PubMed
description Artificial intelligence (AI) and machine learning (ML) techniques occupy a prominent role in medical research in terms of the innovation and development of new technologies. However, while many perceive AI as a technology of promise and hope—one that is allowing for more early and accurate diagnosis—the acceptance of AI and ML technologies in hospitals remains low. A major reason for this is the lack of transparency associated with these technologies, in particular epistemic transparency, which results in AI disturbing or troubling established knowledge practices in clinical contexts. In this article, we describe the development process of one AI application for a clinical setting. We show how epistemic transparency is negotiated and co-produced in close collaboration between AI developers and clinicians and biomedical scientists, forming the context in which AI is accepted as an epistemic operator. Drawing on qualitative research with collaborative researchers developing an AI technology for the early diagnosis of a rare respiratory disease (pulmonary hypertension/PH), this paper examines how including clinicians and clinical scientists in the collaborative practices of AI developers de-troubles transparency. Our research shows how de-troubling transparency occurs in three dimensions of AI development relating to PH: querying of data sets, building software and training the model. The close collaboration results in an AI application that is at once social and technological: it integrates and inscribes into the technology the knowledge processes of the different participants in its development. We suggest that it is a misnomer to call these applications ‘artificial’ intelligence, and that they would be better developed and implemented if they were reframed as forms of sociotechnical intelligence.
format Online
Article
Text
id pubmed-9985768
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher BMJ Publishing Group
record_format MEDLINE/PubMed
spelling pubmed-9985768 2023-03-06 (De)troubling transparency: artificial intelligence (AI) for clinical applications. Winter, Peter David; Carusi, Annamaria. Med Humanit, Original Research. BMJ Publishing Group 2023-03 2022-05-11 /pmc/articles/PMC9985768/ /pubmed/35545432 http://dx.doi.org/10.1136/medhum-2021-012318 Text en © Author(s) (or their employer(s)) 2023. Re-use permitted under CC BY. Published by BMJ. This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 (CC BY 4.0) licence, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.
title (De)troubling transparency: artificial intelligence (AI) for clinical applications
topic Original Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9985768/
https://www.ncbi.nlm.nih.gov/pubmed/35545432
http://dx.doi.org/10.1136/medhum-2021-012318